Member since: 07-26-2018
Posts: 25
Kudos Received: 1
Solutions: 1

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 3391 | 11-02-2018 01:06 AM |
03-20-2019 06:13 PM
@Felix Albani, I'm currently on HDP 2.6.5 and would like to install, or preferably upgrade to, Hive 2.1. Do you happen to know of any documentation on how to perform the install/upgrade? Unfortunately, I cannot see Hive 2.1 in the list of services that I can add.
11-02-2018 01:06 AM
After doing some more research on the absence of a valid TGT, I found that the issue was really default_ccache_name being set to KEYRING:persistent:%{uid} in krb5.conf. I realized I was hitting this specific issue while reading this thread. For whatever reason, Hadoop has a problem with the KEYRING credential cache. Setting default_ccache_name to a FILE cache resolved the issue: the appropriate TGTs are now being provided, and the NameNode no longer takes that long to start and no longer fails. My updated parameter looks like this:
default_ccache_name = FILE:/tmp/krb5cc_%{uid}
I have also propagated the config file throughout the cluster.
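For reference, the relevant part of my /etc/krb5.conf now looks roughly like this (only the [libdefaults] section is shown; EXAMPLE.COM is a placeholder for the actual realm):

[libdefaults]
  # EXAMPLE.COM stands in for the real realm name
  default_realm = EXAMPLE.COM
  default_ccache_name = FILE:/tmp/krb5cc_%{uid}
  ...

After the change, running klist as the hdfs user shows the ticket in the file cache rather than the kernel keyring, which is a quick way to confirm the setting took effect.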
10-29-2018 03:19 AM
Just wanted to add a couple of notes to the above. I have just installed the Zeppelin Notebook service on one of the cluster nodes. After the installation I noticed that NameNode, Secondary NameNode, and MapReduce2 needed to be restarted. The NameNode restart ran for 30 minutes with exactly the same symptoms as in the log above, but this time it failed. I'm still digging and trying to understand why this is happening, but I do have a couple of questions in the meantime:
1. Why is there a need to restart these services after the Zeppelin Notebook installation? I'm not sure I follow what the dependencies are.
2. What could be the reason that the TGT is not found?
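For the second question, the sanity check I plan to run against the NameNode keytab looks roughly like this (the keytab path and principal naming are the usual HDP defaults on my nodes, so they may differ elsewhere):

# run as root or the hdfs user, since the keytab is not world-readable
# list the principals stored in the NameNode service keytab
klist -kt /etc/security/keytabs/nn.service.keytab
# try to obtain a TGT with that keytab, then confirm it landed in the cache
kinit -kt /etc/security/keytabs/nn.service.keytab nn/$(hostname -f)
klist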
10-28-2018 08:41 PM
I have just enabled Kerberos on the Hadoop cluster. The whole process went fairly smoothly. However, after the required restart of all the services, I noticed that it took over 30 minutes for the NameNode to start up. During those 30 minutes it seems that hdfs did not have a valid TGT, based on the messages below. After patiently waiting and thinking it was going to fail any moment, it did in fact come up. My question is why it took so long, and why a valid TGT could not be obtained from the very beginning?
2018-10-27 23:28:54,899 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://*******:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
18/10/27 23:28:54 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
safemode: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "*******/11*.11*.11*.11*"; destination host is: "***.***.***":8020;
18/10/27 23:29:09 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
safemode: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "*******/11*.11*.11*.11*"; destination host is: "************":8020;
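For what it's worth, the safemode check from the log can be rerun by hand once a ticket is in place; a rough sketch (the keytab path and principal naming follow the usual HDP defaults, so adjust to your cluster):

# list the principal stored in the hdfs headless keytab
sudo -u hdfs klist -kt /etc/security/keytabs/hdfs.headless.keytab
# obtain a TGT for that principal, then repeat the check that was being retried
sudo -u hdfs kinit -kt /etc/security/keytabs/hdfs.headless.keytab <principal-from-klist>
sudo -u hdfs hdfs dfsadmin -fs hdfs://<namenode-host>:8020 -safemode get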
Labels:
- Apache Hadoop
10-26-2018 10:55 PM
@Geoffrey Shelton Okot, I would love to do so, but I cannot see that "Accept" button ... Alex
10-26-2018 02:49 PM
@Jay Kumar SenSharma, thanks a lot for helping me! Apparently I had run out of inodes. Not sure why it did not occur to me to check that in the first place ... Anyway, reformatting the filesystem and a little bit of file shuffling did the trick 🙂
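For anyone else hitting this: the check that would have caught it right away is inode usage rather than block usage (block usage looked perfectly fine in my case):

# df -h showed plenty of free space; df -i reports inode usage instead
df -i /disks/disk1/
# an IUse% at or near 100% means the filesystem is out of inodes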
10-26-2018 02:42 PM
@Geoffrey Shelton Okot, the official documentation does not list the steps for installing the Kerberos clients and propagating krb5.conf to all the nodes. Does this mean the Ambari wizard will propagate krb5.conf and install krb5-workstation for me? I know that with Cloudera Manager I have to set up the clients as well, which makes perfect sense. I just wanted to know for sure before I execute the wizard.
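In case it does have to be done by hand, this is roughly what I had in mind for every node (RHEL/CentOS package names; adjust for other distributions):

# install the Kerberos client packages on the node
yum install -y krb5-workstation krb5-libs
# copy the same krb5.conf to each node, run from the node holding the master copy
scp /etc/krb5.conf root@<node>:/etc/krb5.conf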
10-23-2018 10:44 PM
I'm trying to start the Metrics Collector, but I'm getting a strange error instead:
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf start' returned 1. Tue Oct 23 18:05:37 EDT 2018 Starting HBase.
starting master, logging to /disks/disk1/log/ambari-metrics-collector/hbase-ams-master-node4.hdp.com.out
/usr/lib/ams-hbase/bin/hbase-daemon.sh: line 189: /disks/disk1/log/ambari-metrics-collector/hbase-ams-master-node4.hdp.com.out: No space left on device
head: cannot open ‘/disks/disk1/log/ambari-metrics-collector/hbase-ams-master-node4.hdp.com.out’ for reading: No such file or directory
/usr/sbin/ambari-metrics-collector: line 81: /disks/disk1/run/ambari-metrics-collector/ambari-metrics-collector.pid: No space left on device
ERROR: Cannot write pid /disks/disk1/run/ambari-metrics-collector/ambari-metrics-collector.pid.
It is complaining that there is no space left on the device. /disks/disk1/log/ambari-metrics-collector/ambari-metrics-collector.out shows the same thing:
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /disks/disk1/log/ambari-metrics-collector/collector-gc.log-201810231817 due to No space left on device
log4j:ERROR setFile(null,true) call failed. java.io.FileNotFoundException: /disks/disk1/log/ambari-metrics-collector/ambari-metrics-collector.log (No space left on device)
......
But:
# df -h /disks/disk1/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       4.9G  760M  4.2G  16% /disks/disk1
There is clearly some space there. How much space is really needed to write the output file? Thanks, Alex
Labels:
- Apache Ambari