Member since: 10-29-2015
Posts: 25
Kudos Received: 0
Solutions: 0
05-17-2018
01:13 AM
@Brian Burton @cjervis, I really can't thank you both enough for the quick response and info. It worked.
05-16-2018
11:07 AM
@Brian Burton Below is what I downloaded from the Oracle website; it looks like this. So should I remove the previous policy jars and replace them with the ones below? Could you let me know?

-rw-r--r--@ 1 matt staff 7289 May 31 2011 README.txt
-rw-rw-r--@ 1 matt staff 2487 May 31 2011 US_export_policy.jar
-rw-rw-r--@ 1 matt staff 2500 May 31 2011 local_policy.jar
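If replacing is the right move, here is a minimal sketch of the swap (assuming the QuickStart JDK lives under /usr/java/jdk1.7.0_67-cloudera and the Oracle archive was unzipped to ~/Downloads/UnlimitedJCEPolicy; adjust both paths to your layout):

cd /usr/java/jdk1.7.0_67-cloudera/jre/lib/security
# back up the default (limited strength) policy jars first
sudo cp US_export_policy.jar US_export_policy.jar.bak
sudo cp local_policy.jar local_policy.jar.bak
# overwrite them with the unlimited strength jars from the Oracle download
sudo cp ~/Downloads/UnlimitedJCEPolicy/US_export_policy.jar .
sudo cp ~/Downloads/UnlimitedJCEPolicy/local_policy.jar .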
05-16-2018
11:04 AM
@Brian Burton I ran ls on the Java folder, but the file sizes aren't matching. I'm sorry, could you please take a look at the output and see whether those are the unlimited strength jars I need to enable AES256?
05-16-2018
06:38 AM
Thanks for the quick turnaround. As noted in my previous post, I showed a snapshot confirming that I have the policy jars in place, but it is still erroring out. Moreover, the Cloudera QuickStart VM does come with policy jars inside jre/lib/security. I would really appreciate it if anyone could help me; because it is the QuickStart VM, I thought someone would pitch in. @cjervis Matt
05-14-2018
10:15 AM
I have the policy jars in the right directory:

[cloudera@quickstart jdk1.7.0_67-cloudera]$ cd jre/lib/security/
[cloudera@quickstart security]$ ls
blacklist java.policy local_policy.jar
cacerts java.security trusted.libraries
javafx.policy javaws.policy US_export_policy.jar

This is the debug trace:

[cloudera@quickstart jdk1.7.0_67-cloudera]$ export HADOOP_OPTS="-Dsun.security.krb5.debug=true"
[cloudera@quickstart jdk1.7.0_67-cloudera]$ hadoop fs -ls
Java config name: null
Native config name: /etc/krb5.conf
Loaded from native config
>>>KinitOptions cache name is /tmp/krb5cc_501
>>>DEBUG <CCacheInputStream> client principal is hdfs@HADOOPSEC.COM
>>>DEBUG <CCacheInputStream> server principal is krbtgt/HADOOPSEC.COM@HADOOPSEC.COM
>>>DEBUG <CCacheInputStream> key type: 18
>>>DEBUG <CCacheInputStream> auth time: Mon May 14 10:25:41 PDT 2018
>>>DEBUG <CCacheInputStream> start time: Mon May 14 10:25:41 PDT 2018
>>>DEBUG <CCacheInputStream> end time: Tue May 15 10:25:41 PDT 2018
>>>DEBUG <CCacheInputStream> renew_till time: Mon May 21 10:25:41 PDT 2018
>>> CCacheInputStream: readFlags() FORWARDABLE; RENEWABLE; INITIAL;
>>>DEBUG <CCacheInputStream> client principal is hdfs@HADOOPSEC.COM
>>>DEBUG <CCacheInputStream> server principal is X-CACHECONF:/krb5_ccache_conf_data/fast_avail/krbtgt/HADOOPSEC.COM@HADOOPSEC.COM
>>>DEBUG <CCacheInputStream> key type: 0
>>>DEBUG <CCacheInputStream> auth time: Wed Dec 31 16:00:00 PST 1969
>>>DEBUG <CCacheInputStream> start time: null
>>>DEBUG <CCacheInputStream> end time: Wed Dec 31 16:00:00 PST 1969
>>>DEBUG <CCacheInputStream> renew_till time: null
>>> CCacheInputStream: readFlags()
>>> unsupported key type found the default TGT: 18
18/05/14 10:27:37 WARN security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
18/05/14 10:27:37 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
18/05/14 10:27:37 WARN security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
ls: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "quickstart.cloudera/192.168.19.131"; destination host is: "quickstart.cloudera":8020;

My klist:

[cloudera@quickstart jdk1.7.0_67-cloudera]$ klist -e
Ticket cache: FILE:/tmp/krb5cc_501
Default principal: hdfs@HADOOPSEC.COM
Valid starting     Expires            Service principal
05/14/18 10:25:41  05/15/18 10:25:41  krbtgt/HADOOPSEC.COM@HADOOPSEC.COM
        renew until 05/21/18 10:25:41, Etype (skey, tkt): aes256-cts-hmac-sha1-96, aes256-cts-hmac-sha1-96
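For anyone hitting the same trace: key type 18 is aes256-cts-hmac-sha1-96, so the "unsupported key type" line suggests the JVM is still running with the limited strength policy. A quick way to check from the shell (a sketch, assuming jrunscript from the same JDK is on the PATH):

jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'

If this prints 128 rather than 2147483647, the unlimited strength jars are not the ones actually being loaded. After replacing them, it's also worth running kdestroy followed by kinit before retrying hadoop fs -ls.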
05-14-2018
10:05 AM
Can anyone explain what the issue is based on the error log from the Namenode? Also, because of this issue, the Namenode is in Safe Mode. I had manually configured Kerberos and Kerberized the cluster using Cloudera Manager. It's really driving me insane; I am really tired of configuring Kerberos on the QuickStart VM.

Socket Reader #1 for port 8020: readAndProcess from client 192.168.19.131 threw exception [javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API level (Mechanism level: Encryption type AES256 CTS mode with HMAC SHA1-96 is not supported/enabled)]]
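In case the unlimited strength JCE route stays stuck, a workaround I've seen mentioned is to keep AES-256 out of the requested enctypes in /etc/krb5.conf, so the JVM never has to handle a key it can't decrypt. A sketch only (after changing it you'd need kdestroy/kinit so the cached TGT uses a permitted enctype):

[libdefaults]
 default_tkt_enctypes = aes128-cts-hmac-sha1-96 arcfour-hmac
 default_tgs_enctypes = aes128-cts-hmac-sha1-96 arcfour-hmac
 permitted_enctypes = aes128-cts-hmac-sha1-96 arcfour-hmac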
Labels:
- Cloudera Manager
- Kerberos
03-27-2017
10:16 PM
I added the line below to the my.cnf file:

bind-address = 0.0.0.0

I also gave full privileges to the user. I am not sure which one solved the issue, but now I am able to log in remotely. I think setting 0.0.0.0 is unsafe, because it will accept connections from all IP addresses. Please let me know your thoughts, folks.
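For anyone reading later: a tighter alternative to a blanket grant is to scope the privileges to the one host that needs access, so MySQL's grant tables do the filtering even with bind-address = 0.0.0.0. A sketch (user1, pass, movieData and the IP are just the names from my setup; GRANT ... IDENTIFIED BY is the MySQL 5.x syntax):

mysql -u root -p -e "GRANT ALL PRIVILEGES ON movieData.* TO 'user1'@'192.165.123.2' IDENTIFIED BY 'pass'; FLUSH PRIVILEGES;"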
03-27-2017
09:46 PM
Hello, I just started learning Sqoop in pseudo-distributed mode, and now I am trying to install Sqoop on a multinode cluster for testing purposes. I am stuck in the middle; please help me. I have 4 nodes with YARN, HDFS configuration, and all the daemons. I installed MySQL on one node. I am sure that when it is not an uber job it will spread across the nodes; having said that, how will the other nodes access the MySQL database that is installed on the master node? I looked into a few forums for making MySQL accessible from a remote host. In spite of doing all of the below, it still says user@192.165.123.2 does not have access to the host:

1. Added the bind address in the /mysql/my.cnf file.
2. Gave all the privileges to user@192.165.123.2 in MySQL as root.
3. Flushed privileges.
4. All the other hosts have the JDBC driver in the Sqoop client.

Is there any blog that gives a step-by-step installation of Sqoop on a multinode cluster? Host 1 has Sqoop and MySQL installed; on host 2 I am executing the command below (see the connectivity check after the command):

sqoop list-tables \
--connect jdbc:mysql://MasterNode/movieData \
--username user1 \
--password pass
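Before digging further into Sqoop, it may be worth confirming that plain MySQL connectivity works from host 2 (a sketch, assuming the mysql client is installed there; MasterNode, user1, pass and movieData are the names from my setup):

mysql -h MasterNode -u user1 -ppass -e "SHOW TABLES IN movieData;"

If this fails with the same access-denied error, the problem is in the MySQL grants or bind-address, not in Sqoop.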
Labels:
- Apache Sqoop
02-19-2016
09:52 AM
I am not getting it; a ClassNotFoundException sounds more like a missing jar to me. I use the versions below, and it works fine: Sqoop 1.4.4-cdh5.0.0, Hive 0.12.0-cdh5.0.0. I will dig more and let you know if I come up with anything. Sorry.
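One quick way to find out which jar, if any, contains the missing class is to grep the jars on the Sqoop classpath. A sketch (com/example/SomeClass is a hypothetical placeholder for the class named in your stack trace, and $SQOOP_HOME for your install dir):

for j in $SQOOP_HOME/lib/*.jar; do
  unzip -l "$j" | grep -q 'com/example/SomeClass.class' && echo "$j"
done

If nothing prints, the jar really is missing and needs to be dropped into $SQOOP_HOME/lib.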
02-14-2016
07:09 PM
Did you try passing --config? Sorry to ask, but please make sure you have the hive-site.xml file located inside the Hive conf directory. Also, could you please put your command line in a code section like the one below? You will find an icon like this in the edit window, {i}; click on it and put your code in the pop-up window. Thanks.

Put your command line here for readability
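As a first sanity check, something like the following (a sketch, assuming HIVE_HOME points at your Hive install) confirms the file is where the tools expect it:

ls -l $HIVE_HOME/conf/hive-site.xml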
10-29-2015
11:28 AM
Just curious to know your system details, like how much RAM and how many cores you have. Could you tell me?
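If it's easier, the output of these standard Linux commands would tell us everything (a quick sketch):

free -m   # total and used RAM in MB
nproc     # number of available CPU cores
lscpu     # CPU model and topology details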