Member since: 09-24-2015
Posts: 816
Kudos Received: 488
Solutions: 189
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2663 | 12-25-2018 10:42 PM |
| | 12195 | 10-09-2018 03:52 AM |
| | 4200 | 02-23-2018 11:46 PM |
| | 1887 | 09-02-2017 01:49 AM |
| | 2207 | 06-21-2017 12:06 AM |
02-22-2017
11:07 AM
1 Kudo
Until the ZooKeeper integration is done, you can start two or more Thrift servers on different nodes and put a load balancer in front of them. That will do the job.
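As a sketch, an HAProxy TCP front end could look like the following. The hostnames are placeholders, and the backend port assumes the Spark Thrift Server's HDP default of 10015; adjust both to your setup.

frontend thrift_front
    bind *:10000
    mode tcp
    default_backend thrift_back

backend thrift_back
    mode tcp
    balance roundrobin
    # one "server" line per Thrift server node
    server sts1 node1.example.com:10015 check
    server sts2 node2.example.com:10015 check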
02-22-2017
08:37 AM
Thanks, that worked! The bug still appears in Ranger-0.6 included in HDP-2.5.3.
02-19-2017
12:41 AM
Hi @Sundara Palanki, can you check your solr.hdfs.home and solr.hdfs.confdir? Your solr.hdfs.home looks good, so check confdir: it should point to /etc/hadoop/conf/, and your Hadoop conf files like core-site.xml should be there.
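For reference, this is roughly how those properties are passed when starting Solr against HDFS; the NameNode address and HDFS path are placeholders for your environment.

bin/solr start -c \
  -Dsolr.directoryFactory=HdfsDirectoryFactory \
  -Dsolr.lock.type=hdfs \
  -Dsolr.hdfs.home=hdfs://namenode.example.com:8020/user/solr \
  -Dsolr.hdfs.confdir=/etc/hadoop/conf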
02-18-2017
12:46 AM
You can use MariaDB, but I'm not sure you can install Hive without the MySQL community repo. If you haven't installed the cluster yet, you can try during installation from Ambari: in the "Customize Services" step, go to Hive -> Configs -> Advanced and set the Hive database to "Existing MySQL/MariaDB", and see what happens during the installation phase. I haven't tried it, and frankly speaking I'd be surprised if it works, but you can try.
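If you do try the "Existing MySQL/MariaDB" route, the usual prerequisites look roughly like this; the database name, user, and password here are placeholders.

# on the MariaDB host: create the metastore database and user
mysql -u root -p -e "CREATE DATABASE hive;
  CREATE USER 'hive'@'%' IDENTIFIED BY 'hivepassword';
  GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
  FLUSH PRIVILEGES;"

# on the Ambari server: register the JDBC driver before the install
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar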
02-16-2017
11:02 AM
This worked after checking the "The other domain supports Kerberos AES Encryption" check-box in the trusted domain's properties dialog on AD. So doing just "ksetup /setenctypeattr AES..." is not enough (that appears only to update a cell in the Windows registry). Details here.
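For completeness, the ksetup command in question looks like this when run in an elevated prompt on the AD side; the realm name follows this thread's example:

ksetup /setenctypeattr HDP-NET.COM AES256-CTS-HMAC-SHA1-96 AES128-CTS-HMAC-SHA1-96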
02-15-2017
10:45 PM
Good to know! Then, please consider upvoting and/or accepting my first answer above. Thanks!
02-15-2017
01:34 PM
1 Kudo
Hi @bsaini, you can keep it as an int or float representing a Unix timestamp in seconds (float if you want sub-second precision down to nanoseconds), or as a string. From what I see here: "Timestamps are interpreted to be timezoneless and stored as an offset from the UNIX epoch. Convenience UDFs for conversion to and from timezones are provided (to_utc_timestamp, from_utc_timestamp)."
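A quick illustration with the built-in UDFs; the JDBC URL is a placeholder and 'JST' is just an example timezone:

beeline -u jdbc:hive2://localhost:10000 -e "
  SELECT from_unixtime(1487144291),                        -- seconds since epoch -> 'yyyy-MM-dd HH:mm:ss' string
         to_utc_timestamp('2017-02-15 16:38:11', 'JST'),   -- wall-clock time in JST -> UTC timestamp
         from_utc_timestamp('2017-02-15 07:38:11', 'JST')  -- UTC timestamp -> wall-clock time in JST
"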
02-15-2017
07:55 AM
Hi @Juan Manuel Nieto, thanks for your reply. I tried that, and several other values for "-e", but the error is the same, as below. BTW, when I do "klist -e user1@PQR-NET.COM" it says that the encryption is aes256-cts-hmac-sha1-96. getprinc on krbtgt/LOCAL@PQR-NET.COM also returns both aes256-cts-hmac-sha1-96 and aes128-cts-hmac-sha1-96 and some other types. AD runs on Win-2008 and is supposed to support these types. BTW, I found "default_tgs_enctypes: 18 17 16 23" here; 18 stands for aes256-cts-hmac-sha1-96. In the log below, 192.168.120.120 is my AD server.
>>> Credentials acquireServiceCreds: main loop: [0] tempService=krbtgt/HDP-NET.COM@PQR-NET.COM
Using builtin default etypes for default_tgs_enctypes
default etypes for default_tgs_enctypes: 18 17 16 23.
>>> CksumType: sun.security.krb5.internal.crypto.RsaMd5CksumType
>>> EType: sun.security.krb5.internal.crypto.Aes256CtsHmacSha1EType
>>> KrbKdcReq send: kdc=192.168.120.120 UDP:88, timeout=30000, number of retries =3, #bytes=1411
>>> KDCCommunication: kdc=192.168.120.120 UDP:88, timeout=30000,Attempt =1, #bytes=1411
>>> KrbKdcReq send: #bytes read=97
>>> KdcAccessibility: remove 192.168.120.120
>>> KDCRep: init() encoding tag is 126 req type is 13
>>>KRBError:
sTime is Wed Feb 15 16:38:11 JST 2017 1487144291000
suSec is 949340
error code is 14
error Message is KDC has no support for encryption type
sname is krbtgt/HDP-NET.COM@PQR-NET.COM
msgType is 30
>>> Credentials acquireServiceCreds: no tgt; searching thru capath
>>> Credentials acquireServiceCreds: inner loop: [1] tempService=krbtgt/LOCAL@PQR-NET.COM
...
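For anyone else hitting error code 14 (KDC_ERR_ETYPE_NOSUPP), one thing to try is pinning the enctypes explicitly in the [libdefaults] section of /etc/krb5.conf; a sketch, where the exact list must match what your AD side actually supports:

[libdefaults]
  default_tkt_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
  default_tgs_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
  permitted_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96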
02-15-2017
06:45 AM
That's strange. Can you check two more places? (1) On the Ambari Dashboard, hover over "HDFS Disk Usage": what do you see there for DFS used, non-DFS used, and remaining? (2) In HDFS -> Quick Links -> NameNode UI, what is your "Configured Capacity"?
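You can also pull the same numbers from the command line; the summary at the top of the report shows configured capacity and DFS used (run it as the hdfs user):

sudo -u hdfs hdfs dfsadmin -report | head -n 10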
02-14-2017
10:10 PM
1 Kudo
In HDFS -> Configs, check whether you have assigned your disks as NameNode and DataNode directories. In particular, under DataNode directories you should have one directory for each disk you want used for HDFS; in your case 10-11 of them, all except the one holding the OS. Ambari is aware only of disk space assigned in this way.
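For example, with data disks mounted under /grid/0 through /grid/10 (hypothetical mount points), the DataNode directories field (dfs.datanode.data.dir) becomes a comma-separated list, one entry per disk:

/grid/0/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data,/grid/2/hadoop/hdfs/data
(continue the pattern up through /grid/10/hadoop/hdfs/data, one entry per data disk)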