Member since: 06-13-2016
Posts: 4
Kudos Received: 3
Solutions: 0
08-10-2016 03:40 PM - 2 Kudos
Thank you mageru9, this worked! The proper permissions are: /tmp/logs should be 1777 and owned by mapred:hadoop. The leading 1 sets the sticky bit, so only a file's owner (or the superuser) can delete or move entries under /tmp/logs; in HDFS, new children automatically inherit the group of the parent directory (hadoop).

sudo -u hdfs hdfs dfs -chmod 1777 /tmp/logs
sudo -u hdfs hdfs dfs -chown mapred:hadoop /tmp/logs
sudo -u hdfs hdfs dfs -chgrp -R hadoop /tmp/logs

After this, restart your JobHistory Server.
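A quick way to confirm the result after running the commands above (a minimal check, assuming the standard HDFS CLI is on your PATH):

# -d shows the directory entry itself rather than its contents.
# Expect output roughly like:
#   drwxrwxrwt   - mapred hadoop   0 2016-08-10 15:40 /tmp/logs
# The trailing "t" confirms the sticky bit; mapred/hadoop confirm owner and group.
sudo -u hdfs hdfs dfs -ls -d /tmp/logs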
07-13-2016 03:51 PM - 1 Kudo
I had the same issue. The hosts in my cluster had entries like this in /etc/hosts:

192.168.X.X Master
192.168.X.X Slave1
192.168.X.X Slave2
192.168.X.X Slave3

and the generated principal names were like hdfs/Master@CLOUDERA and spark/Slave1@CLOUDERA. But when a DataNode started, it looked for hdfs/master@CLOUDERA (lowercase) instead of hdfs/Master@CLOUDERA.

Resolution steps (see the sanity check after this list):
1) Change HOSTNAME in /etc/sysconfig/network: HOSTNAME=master on the Master node, HOSTNAME=slave1 on the Slave1 node, and so on.
2) Make every host in the cluster use the same lowercase hostnames in /etc/hosts:
192.168.X.X master
192.168.X.X slave1
3) Reboot all hosts.
4) Verify the hostnames.
5) In Cloudera Manager, regenerate the keytab for each host.
6) Go to Administration -> Security -> Kerberos Credentials and check that the principal names now use the correct hostnames, e.g. hdfs/master@CLOUDERA and hdfs/slave1@CLOUDERA.
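As a sanity check after regenerating the keytabs, you can compare the hostname each daemon will use against the principals actually stored in its keytab. This is only a sketch; the keytab path below is a placeholder, so substitute whatever path your HDFS role actually uses:

# Hostname the services build their principals from; it should be lowercase
# and match what is in /etc/hosts.
hostname -f

# List the principals inside the HDFS keytab (requires MIT Kerberos tools).
# They should match the lowercase hostname, e.g. hdfs/master@CLOUDERA.
klist -kt /path/to/hdfs.keytab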