Member since: 08-08-2013
Posts: 339
Kudos Received: 132
Solutions: 27

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 14824 | 01-18-2018 08:38 AM
 | 1570 | 05-11-2017 06:50 PM
 | 9186 | 04-28-2017 11:00 AM
 | 3437 | 04-12-2017 01:36 AM
 | 2832 | 02-14-2017 05:11 AM
03-31-2014
01:46 AM
1 Kudo
Hi, after disabling Kerberos the HBase Master won't start because it has no access to the ZooKeeper znode /hbase/shutdown. I tried to remove it in the ZooKeeper shell (started as user root), but with no success =>

[zk: localhost:2181(CONNECTED) 3] rmr /hbase/shutdown
Authentication is not valid : /hbase/shutdown
[zk: localhost:2181(CONNECTED) 4] getAcl /hbase/shutdown
'sasl,'hbase
: cdrwa
[zk: localhost:2181(CONNECTED) 5]

How can I forcibly delete that subtree, to be able to start HBase afterwards? Error in the HBase Master log:

2014-03-31 10:23:41,760 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil: master:60000-0x4451714a72b004b Unable to get data of znode /hbase/shutdown
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /hbase/shutdown

thanks in advance...Gerd...
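A sketch of one possible cleanup route (the skipACL flag is a standard ZooKeeper server option, but the environment setup below is an assumption for a CM-managed install, and the flag disables ALL ACL checks, so it should only be set briefly):

```
# Temporarily start the ZooKeeper server with ACL checks disabled,
# e.g. by adding the flag to the server JVM options:
export SERVER_JVMFLAGS="-Dzookeeper.skipACL=yes"
# ...restart the ZooKeeper server, then remove the protected subtree:
zookeeper-client -server localhost:2181
rmr /hbase
# Finally remove the flag again, restart ZooKeeper normally, and start
# the HBase Master; it recreates /hbase on startup.
```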
Labels:
- Apache HBase
- Apache Zookeeper
03-31-2014
12:17 AM
Hi Darren, GOTCHA 😉 reverting both ports back to their defaults solved the problem, many thanks!
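For anyone else hitting this, a sketch of the port settings involved (the defaults below are the stock HDFS values; 1004/1006 are the privileged ports a kerberized DataNode commonly uses and are an assumption about this cluster's previous config, as is the config path):

```
# Non-secure DataNode defaults (hdfs-site.xml):
#   dfs.datanode.address      = 0.0.0.0:50010
#   dfs.datanode.http.address = 0.0.0.0:50075
# A secured DataNode runs on privileged ports (commonly 1004/1006) that
# only root may bind. Check what a node is actually configured with:
grep -A1 'dfs.datanode' /etc/hadoop/conf/hdfs-site.xml
```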
... View more
03-28-2014
08:00 AM
Hi, I disabled Kerberos (setting Service HDFS => Configuration => Authentication type "simple") while all services were stopped. Afterwards I wanted to start the HDFS service, but the Datanodes fail with the error:

Exception in secureMain
java.io.IOException: Failed on local exception: java.net.SocketException: Permission denied; Host Details : local host is: "hadoop-pg-4.cluster"; destination host is: (unknown):0;
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:763)
	at org.apache.hadoop.ipc.Server.bind(Server.java:403)
	at org.apache.hadoop.ipc.Server.bind(Server.java:375)
	at org.apache.hadoop.hdfs.net.TcpPeerServer.<init>(TcpPeerServer.java:106)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:555)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:741)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:344)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1795)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1728)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1751)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1904)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1925)
Caused by: java.net.SocketException: Permission denied
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:444)
	at sun.nio.ch.Net.bind(Net.java:436)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.apache.hadoop.ipc.Server.bind(Server.java:386)
	... 10 more

On the datanodes there is no Hadoop-related process running and nothing is listening on the required ports. How do I successfully disable Kerberos and start my cluster afterwards? CDH4.6 / CM4.8 regards, Gerd
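The "Permission denied" on bind points at privileged ports. A minimal sketch of the effect (port 1004 is just an assumed example of a secure-DataNode port; nothing here is cluster-specific):

```
# On Linux, ports below 1024 may only be bound by root. As a regular
# user, the first bind fails exactly like the SocketException above:
python -c "import socket; socket.socket().bind(('', 1004))"   # Permission denied
python -c "import socket; socket.socket().bind(('', 50010))"  # default DN port, succeeds
```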
Labels:
- Apache Hadoop
- HDFS
- Kerberos
03-04-2014
05:24 AM
Hi Chris, thanks, restarting the MGMT services solved the test issues. bye...Gerd...
03-04-2014
02:35 AM
Hi, after enabling Kerberos security and restarting the cluster, the status of the HDFS service shows "Bad", although the service instances are all "Good". It seems like just the service checks are failing; error messages from CM:

1. No connection to determine the active NameNode could be made for the last 3 minute(s)...
2. Canary test failed to create parent directory for /tmp/.cloudera_health_monitoring_canary_files.

Even the Namenode UI cannot be opened successfully => after clicking "Namenode Web UI" (which results in http://hadoop-pg:50070) in the service view I receive "HTTP ERROR 401 Problem accessing /index.html."

How do I solve the described issues: a) eliminate the failures of the HDFS service checks, and b) the inability to open the Namenode UI? thanks in advance, Gerd
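If the 401 comes from Kerberos/SPNEGO now being enforced on the web UI (an assumption, not confirmed for this cluster), a quick sketch of how to test that from a host with a valid ticket (requires curl built with GSS support):

```
kinit hdfs@HADOOP-PG                                      # obtain a ticket first
curl --negotiate -u : http://hadoop-pg:50070/index.html
# An HTTP 200 here would mean the UI itself is fine and the browser
# merely lacks SPNEGO configuration; a 401 points elsewhere.
```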
02-26-2014
03:31 AM
9 Kudos
STUPID ME 😉 Re-checking the installation of the JCE files put me on the right track. Executing the hadoop command on the shell was using the "old" Java 6, while I had installed the JCE files only for Java 7 (in CM I had configured JAVA_HOME to use Java 7). A simple "export JAVA_HOME=/usr/lib/jvm/java-7-oracle/jre" before executing "hadoop dfs ..." on the shell solved this issue.
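A sketch of how to confirm which JRE a shell-invoked hadoop command actually picks up (paths are the ones from this thread):

```
# With JAVA_HOME unset, the hadoop wrapper falls back to the java on
# PATH, which may differ from the JDK configured in Cloudera Manager.
readlink -f "$(which java)"                        # JRE currently on PATH
export JAVA_HOME=/usr/lib/jvm/java-7-oracle/jre    # JRE holding the JCE jars
hadoop fs -ls /user
```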
02-26-2014
02:18 AM
Tried a different approach, sadly resulting in the same problem/error. I tried to use the hdfs user principal created by Cloudera Manager to submit an HDFS command on the shell, but I still get this "unsupported key type found the default TGT: 18". Log:

#> su - hdfs
#> export HADOOP_OPTS="-Dsun.security.krb5.debug=true"
#> kinit -k -t /var/run/cloudera-scm-agent/process/1947-hdfs-DATANODE/hdfs.keytab hdfs/hadoop-pg-7.cluster
#> kinit -R
#> hadoop dfs -ls /user
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Config name: /etc/krb5.conf
>>>KinitOptions cache name is /tmp/krb5cc_998
>>>DEBUG <CCacheInputStream> client principal is hdfs/hadoop-pg-7.cluster@HADOOP-PG
>>>DEBUG <CCacheInputStream> server principal is krbtgt/HADOOP-PG@HADOOP-PG
>>>DEBUG <CCacheInputStream> key type: 18
>>>DEBUG <CCacheInputStream> auth time: Wed Feb 26 11:07:49 CET 2014
>>>DEBUG <CCacheInputStream> start time: Wed Feb 26 11:07:55 CET 2014
>>>DEBUG <CCacheInputStream> end time: Thu Feb 27 11:07:55 CET 2014
>>>DEBUG <CCacheInputStream> renew_till time: Wed Mar 05 11:07:49 CET 2014
>>> CCacheInputStream: readFlags() FORWARDABLE; PROXIABLE; RENEWABLE; INITIAL;
>>> unsupported key type found the default TGT: 18
14/02/26 11:08:07 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
14/02/26 11:08:07 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
14/02/26 11:08:07 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
ls: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "hadoop-pg-7.cluster/10.147.210.7"; destination host is: "hadoop-pg-2.cluster":8020;

#> klist -ef
Ticket cache: FILE:/tmp/krb5cc_998
Default principal: hdfs/hadoop-pg-7.cluster@HADOOP-PG
Valid starting     Expires            Service principal
02/26/14 11:08:21  02/27/14 11:08:21  krbtgt/HADOOP-PG@HADOOP-PG
	renew until 03/05/14 11:07:49, Flags: FPRIT
	Etype (skey, tkt): AES-256 CTS mode with 96-bit SHA-1 HMAC, AES-256 CTS mode with 96-bit SHA-1 HMAC

Now what?
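A complementary check worth sketching here: list the encryption types stored in the keytab itself (keytab path taken from this post). aes256-cts entries correspond to the "key type 18" in the debug output and require the unlimited-strength JCE policy on the client JVM:

```
# -k: read a keytab, -e: show enctypes, -t: show timestamps
klist -ket /var/run/cloudera-scm-agent/process/1947-hdfs-DATANODE/hdfs.keytab
# Look for "aes256-cts-hmac-sha1-96" entries (key type 18).
```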
02-25-2014
06:17 AM
2 Kudos
Hi, after enabling Kerberos security on the cluster (related guideline here) I got stuck at step 15 (Create the hdfs Super User Principal). In the end I am not able to execute a hadoop command as user hdfs from the command line, like "sudo -u hdfs hadoop dfs -ls /user". After reading some docs and sites I verified that I have installed the Java security jars and that the krbtgt principal doesn't have the attribute "requires_preauth".

Problem:
=======
Execution of sudo -u hdfs hadoop dfs -ls /user fails with the error:

root@hadoop-pg-2:~# sudo -u hdfs hadoop dfs -ls /user
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
14/02/25 14:32:10 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
14/02/25 14:32:10 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
14/02/25 14:32:10 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
ls: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "hadoop-pg-2.cluster/10.147.210.2"; destination host is: "hadoop-pg-2.cluster":8020;

Previous steps
============
1. Create the hdfs principal via kadmin: addprinc hdfs@HADOOP-PG
2. Obtain a TGT for user hdfs: kinit hdfs@HADOOP-PG
3. Check: klist -f

root@hadoop-pg-2:~# klist -f
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs@HADOOP-PG
Valid starting     Expires            Service principal
02/25/14 14:30:32  02/26/14 14:30:29  krbtgt/HADOOP-PG@HADOOP-PG
	renew until 03/04/14 14:30:29, Flags: FPRIA

=> Thereby I assume authentication for user hdfs worked nicely, since the password provided when creating the principal and obtaining the TGT was accepted, and a TGT was created successfully.

4. Execute the Hadoop command mentioned above => results in the error shown above 😞
5. Try to renew the ticket: kinit -R. Executes successfully.
6. Repeat step 4 => same error.
7. Enable Kerberos debug output and try to run step 4 again. Log:

root@hadoop-pg-2:~$ su - hdfs
hdfs@hadoop-pg-2:~$ kinit
Password for hdfs@HADOOP-PG:
hdfs@hadoop-pg-2:~$ klist
Ticket cache: FILE:/tmp/krb5cc_996
Default principal: hdfs@HADOOP-PG
Valid starting Expires Service principal
02/25/14 14:55:26 02/26/14 14:55:26 krbtgt/HADOOP-PG@HADOOP-PG
renew until 03/04/14 14:55:26
hdfs@hadoop-pg-2:~$ hadoop dfs -ls /user
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Config name: /etc/krb5.conf
>>>KinitOptions cache name is /tmp/krb5cc_996
>>>DEBUG <CCacheInputStream> client principal is hdfs@HADOOP-PG
>>>DEBUG <CCacheInputStream> server principal is krbtgt/HADOOP-PG@HADOOP-PG
>>>DEBUG <CCacheInputStream> key type: 18
>>>DEBUG <CCacheInputStream> auth time: Tue Feb 25 14:55:26 CET 2014
>>>DEBUG <CCacheInputStream> start time: Tue Feb 25 14:55:26 CET 2014
>>>DEBUG <CCacheInputStream> end time: Wed Feb 26 14:55:26 CET 2014
>>>DEBUG <CCacheInputStream> renew_till time: Tue Mar 04 14:55:26 CET 2014
>>> CCacheInputStream: readFlags() FORWARDABLE; PROXIABLE; RENEWABLE; INITIAL;
>>>DEBUG <CCacheInputStream> client principal is hdfs@HADOOP-PG
>>>DEBUG <CCacheInputStream> server principal is X-CACHECONF:/krb5_ccache_conf_data/fast_avail/krbtgt/HADOOP-PG@HADOOP-PG
>>>DEBUG <CCacheInputStream> key type: 0
>>>DEBUG <CCacheInputStream> auth time: Thu Jan 01 01:00:00 CET 1970
>>>DEBUG <CCacheInputStream> start time: Thu Jan 01 01:00:00 CET 1970
>>>DEBUG <CCacheInputStream> end time: Thu Jan 01 01:00:00 CET 1970
>>>DEBUG <CCacheInputStream> renew_till time: Thu Jan 01 01:00:00 CET 1970
>>> CCacheInputStream: readFlags()
>>> unsupported key type found the default TGT: 18
14/02/25 14:55:40 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
14/02/25 14:55:40 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
14/02/25 14:55:40 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
ls: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "hadoop-pg-2.cluster/10.147.210.2"; destination host is: "hadoop-pg-2.cluster":8020;

The message "unsupported key type found the default TGT: 18" makes me think of missing the Java strong crypto files, but I copied the jars US_export_policy.jar and local_policy.jar into the folder /usr/lib/jvm/java-7-oracle/jre/lib/security =>

hdfs@hadoop-pg-2:/usr/lib/jvm$ ls -al /usr/lib/jvm/java-7-oracle/jre/lib/security/
total 140
drwxr-xr-x 2 root root 4096 Jan 31 10:30 .
drwxr-xr-x 16 root root 4096 Jan 31 10:30 ..
-rw-r--r-- 1 root root 2770 Jan 31 10:30 blacklist
-rw-r--r-- 1 root root 82586 Jan 31 10:30 cacerts
-rw-r--r-- 1 root root 158 Jan 31 10:30 javafx.policy
-rw-r--r-- 1 root root 2593 Jan 31 10:30 java.policy
-rw-r--r-- 1 root root 17838 Jan 31 10:30 java.security
-rw-r--r-- 1 root root 98 Jan 31 10:30 javaws.policy
-rw-r--r-- 1 root root 2500 Feb 21 15:41 local_policy.jar
-rw-r--r-- 1 root root 0 Jan 31 10:30 trusted.libraries
-rw-r--r-- 1 root root 2487 Feb 21 15:41 US_export_policy.jar

I have no idea what to check next, any help appreciated 🙂 (I want to avoid removing AES256 from being supported by Kerberos and thereby having to recreate all principals, or even creating a new Kerberos DB...)
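A sketch of a direct test of whether the unlimited-strength policy files are actually picked up by a given JRE (jrunscript ships with the JDK; the path below follows this thread's install and is otherwise an assumption):

```
# Prints 2147483647 when the unlimited JCE policy is active; a plain
# 128 means the default export-restricted policy is in effect and
# AES-256 (key type 18) tickets cannot be decrypted.
/usr/lib/jvm/java-7-oracle/bin/jrunscript -e \
  'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'
```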
02-25-2014
12:26 AM
Hi Tgrayson, thanks for your answer. It seems like adding the {ticket_|renew_}lifetime parameters solved the problem. After inserting them, reducing the original renew lifetime to 7d, and restarting all the services, it looks good and I can proceed with the doc mentioned in the initial post. thanks, Gerd
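For reference, a sketch of what those parameters look like in /etc/krb5.conf (the 7d renew lifetime is from this reply; the other values are assumed typical settings, not taken from this cluster):

```
# /etc/krb5.conf
[libdefaults]
    default_realm = HADOOP-PG
    ticket_lifetime = 24h
    renew_lifetime = 7d
```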
02-24-2014
07:14 AM
Hi, I am currently in the process of enabling security in our cluster (CDH4.5, CM4.8) according to the documentation here => http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/4.5.4/Configuring-Hadoop-Security-with-Cloudera-Manager/Configuring-Hadoop-Security-with-Cloudera-Manager.html

Everything went fine until step 14, starting all the services. The service "Kerberos Ticket Renewer" doesn't start; the latest log entries are:

[24/Feb/2014 15:41:39 +0000] settings INFO Welcome to Hue 2.5.0
[24/Feb/2014 15:41:40 +0000] kt_renewer INFO Reinitting kerberos from keytab: /usr/bin/kinit -k -t /var/run/cloudera-scm-agent/process/1715-hue-KT_RENEWER/hue.keytab -c /tmp/hue_krb5_ccache hue/hadoop-pg-1.cluster
[24/Feb/2014 15:41:42 +0000] kt_renewer INFO Renewing kerberos ticket to work around kerberos 1.8.1: /usr/bin/kinit -R -c /tmp/hue_krb5_ccache
[24/Feb/2014 15:41:42 +0000] kt_renewer ERROR Couldn't renew kerberos ticket in order to work around Kerberos 1.8.1 issue. Please check that the ticket for 'hue/hadoop-pg-1.cluster' is still renewable:
$ kinit -f -c /tmp/hue_krb5_ccache
If the 'renew until' date is the same as the 'valid starting' date, the ticket cannot be renewed. Please check your KDC configuration, and the ticket renewal policy (maxrenewlife) for the 'hue/hadoop-pg-1.cluster' and 'krbtgt' principals.

The log of the KDC shows:

Feb 24 15:41:33 hadoop-pg-1 krb5kdc[4475](info): AS_REQ (4 etypes {18 17 16 23}) 10.147.210.1: NEEDED_PREAUTH: hue/hadoop-pg-1.cluster@HADOOP-PG for krbtgt/HADOOP-PG@HADOOP-PG, Additional pre-authentication required
Feb 24 15:41:33 hadoop-pg-1 krb5kdc[4475](info): AS_REQ (4 etypes {18 17 16 23}) 10.147.210.1: ISSUE: authtime 1393252893, etypes {rep=18 tkt=18 ses=18}, hue/hadoop-pg-1.cluster@HADOOP-PG for krbtgt/HADOOP-PG@HADOOP-PG
Feb 24 15:41:35 hadoop-pg-1 krb5kdc[4475](info): TGS_REQ (4 etypes {18 17 16 23}) 10.147.210.1: TICKET NOT RENEWABLE: authtime 0, hue/hadoop-pg-1.cluster@HADOOP-PG for krbtgt/HADOOP-PG@HADOOP-PG, KDC can't fulfill requested option
Feb 24 15:41:35 hadoop-pg-1 krb5kdc[4475](info): TGS_REQ (4 etypes {18 17 16 23}) 10.147.210.1: TICKET NOT RENEWABLE: authtime 0, hue/hadoop-pg-1.cluster@HADOOP-PG for krbtgt/HADOOP-PG@HADOOP-PG, KDC can't fulfill requested option

The KDC config looks like:

[kdcdefaults]
    kdc_ports = 750,88

[realms]
    HADOOP-PG = {
        database_name = /var/lib/krb5kdc/principal
        admin_keytab = FILE:/etc/krb5kdc/kadm5.keytab
        acl_file = /etc/krb5kdc/kadm5.acl
        key_stash_file = /etc/krb5kdc/stash
        kdc_ports = 750,88
        max_life = 1d 0h 0m 0s
        max_renewable_life = 90d 0h 0m 0s
        master_key_type = des3-hmac-sha1
        supported_enctypes = aes256-cts:normal arcfour-hmac:normal des3-hmac-sha1:normal des-cbc-crc:normal des:normal des:v4 des:norealm des:onlyrealm des:afs3
        default_principal_flags = +preauth +renewable
    }

Additionally I set the following:

kadmin.local: modprinc -maxlife "1 day" -maxrenewlife "90 day" +allow_renewable hue/hadoop-pg-1.cluster@HADOOP-PG

Any hints where to investigate to resolve this issue? br, Gerd
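One common cause of "TICKET NOT RENEWABLE" worth checking (a sketch, not confirmed for this setup): the krbtgt principal itself must carry a non-zero maxrenewlife, and tickets issued before the change stay non-renewable until reissued:

```
# Inspect the TGT principal's current renew policy:
kadmin.local -q "getprinc krbtgt/HADOOP-PG@HADOOP-PG"
# Raise it if "Maximum renewable life" is 0:
kadmin.local -q 'modprinc -maxrenewlife "90 day" krbtgt/HADOOP-PG@HADOOP-PG'
# Then destroy cached tickets (kdestroy) and kinit again; klist should
# now show a 'renew until' later than 'valid starting'.
```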
Labels:
- Apache Hadoop
- Cloudera Hue
- Kerberos
- Security