Member since: 03-14-2016 | Posts: 4721 | Kudos Received: 1111 | Solutions: 874
06-12-2019
12:36 AM
@Michael Klaene In a kerberized cluster you will need to first post your "kdc.admin.credential" to Ambari and then try your other API calls. Example (please provide your own KDC principal and credentials there):

# curl -s -H "X-Requested-By:ambari" --user admin:admin -i -X POST -d '{ "Credential" : { "principal" : "admin/admin", "key" : "admin", "type" : "temporary" } }' http://my-ambari-host:8080/api/v1/clusters/my-cluster/credentials/kdc.admin.credential

Then try your API call:

# curl --user admin:admin -i -H "X-Requested-By:ambari" -X PUT -d '{"HostRoles": {"state":"INSTALLED"}}' http://my-ambari-host:8080/api/v1/clusters/my-cluster/hosts/my-host/host_components/ATLAS_CLIENT

The temporary credential store is an in-memory keystore: each entry is removed 90 minutes after initial creation, when Ambari is restarted, or on user request. The persisted credential store is a keystore stored on disk: each entry is removed only on user request. The option to store a credential in the persisted store is only available if Ambari's credential store has been set up. Please refer to the following HCC article as well: https://community.hortonworks.com/articles/42927/adding-kdc-administrator-credentials-to-the-ambari.html
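A hedged follow-up sketch, reusing the placeholder hostnames and admin:admin credentials from the example above: if you have already set up Ambari's credential store (via "ambari-server setup-security"), you should be able to POST the same payload with "type" : "persisted" so the credential survives Ambari restarts, and GET the same endpoint to confirm the credential was stored (the GET returns only metadata such as the type, never the key itself):

# curl -s -H "X-Requested-By:ambari" --user admin:admin -i -X POST -d '{ "Credential" : { "principal" : "admin/admin", "key" : "admin", "type" : "persisted" } }' http://my-ambari-host:8080/api/v1/clusters/my-cluster/credentials/kdc.admin.credential
# curl -s -H "X-Requested-By:ambari" --user admin:admin -X GET http://my-ambari-host:8080/api/v1/clusters/my-cluster/credentials/kdc.admin.credential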
06-11-2019
10:52 AM
@Adil BAKKOURI The following message appears to be causing the DataNode startup failure:

2019-06-11 12:30:52,832 WARN common.Storage (DataStorage.java:loadDataStorage(418)) - Failed to add storage directory [DISK]file:/hadoop/hdfs/data
java.io.IOException: Incompatible clusterIDs in /hadoop/hdfs/data: namenode clusterID = CID-bd1a4e24-9ff2-4ab8-928a-f04000e375cc; datanode clusterID = CID-9a605cbd-1b0e-41d3-885e-f0efcbe54851

It looks like your VERSION files have different cluster IDs for the NameNode and DataNode, which needs to be corrected. Please copy the clusterID from the NameNode's "<dfs.namenode.name.dir>/current/VERSION" and put it in the DataNode's "<dfs.datanode.data.dir>/current/VERSION", then try again. Also please check the following link: https://community.hortonworks.com/questions/79432/datanode-goes-dows-after-few-secs-of-starting-1.html
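As a sketch of that fix: the DataNode data dir /hadoop/hdfs/data is taken from the log above, while /hadoop/hdfs/namenode is an assumption, so substitute your actual dfs.namenode.name.dir value. Stop the DataNode, back up the file, then copy the NameNode's clusterID over:

# grep clusterID /hadoop/hdfs/namenode/current/VERSION
clusterID=CID-bd1a4e24-9ff2-4ab8-928a-f04000e375cc
# cp /hadoop/hdfs/data/current/VERSION /hadoop/hdfs/data/current/VERSION.bak
# sed -i 's/^clusterID=.*/clusterID=CID-bd1a4e24-9ff2-4ab8-928a-f04000e375cc/' /hadoop/hdfs/data/current/VERSION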
06-11-2019
10:21 AM
@Adil BAKKOURI Based on your latest logs, it looks like after changing the permissions the NameNode is starting fine without any issue. For the DataNode startup issue, please open a separate HCC thread, and mark this HCC thread as answered by clicking the "Accept" link on the correct answer.
06-11-2019
09:50 AM
@Adil BAKKOURI For changing file/directory ownership you can use standard Unix/Linux commands, something like the following:

# chown -R hdfs:hadoop /hadoop/hdfs/namenode/current/

(OR)

# chown -R hdfs:hadoop /hadoop/hdfs/namenode
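As a quick hedged check after the chown (same path as above), you can list anything under the NameNode directory still owned by another user or group; empty output means the ownership is consistent:

# find /hadoop/hdfs/namenode ! -user hdfs -ls
# find /hadoop/hdfs/namenode ! -group hadoop -ls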
06-11-2019
08:57 AM
@Adil BAKKOURI Thank you for sharing the NameNode logs. Based on the logs we can see that your NameNode is not starting successfully because of the following error:

2019-06-07 10:29:46,758 WARN namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(716)) - Encountered exception loading fsimage
java.io.FileNotFoundException: /hadoop/hdfs/namenode/current/VERSION (Permission denied)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)

Hence, please check the permissions set on the various files and directories of your Hadoop installation. Ideally they should be owned by "hdfs:hadoop", something like the following:

[root@newhwx1 ~]# ls -lart /hadoop/hdfs/namenode/current/VERSION
-rw-r--r--. 1 hdfs hadoop 206 Jun 6 06:13 /hadoop/hdfs/namenode/current/VERSION

So I suspect that you might have mistakenly started your NameNode as some other user (like root), and hence the permissions of your installation files might have been changed. Please fix those file permissions and then try to restart the NameNode again. Please check the permissions of all the directory contents: are these directories owned by the same user who is running the HDFS NameNode? (Ideally in an HDP installation they are owned by "hdfs:hadoop".)

# ls -lart /hadoop/hdfs/namenode/current/
# ls -ld /hadoop/hdfs/namenode/current/
# ls -ld /hadoop/hdfs/namenode
# ls -ld /hadoop/hdfs
# ls -ld /hadoop
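To confirm which user actually launched the NameNode process (the suspicion above is that it was started as root or some other user), a small hedged check using generic commands:

# ps -ef | grep -v grep | grep 'org.apache.hadoop.hdfs.server.namenode.NameNode' | awk '{print $1}'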
06-11-2019
06:35 AM
1 Kudo
@Alampally Vinith Maybe you can invoke the Ambari API call to find the current HDFS configs and then grep for the properties you want.

Example: finding the DataNode HTTP port.

# curl -s -u admin:admin -H "X-Requested-By:ambari" -X GET "http://$AMBARI_HOSTNAME:8080/api/v1/clusters/$CLUSTER_NAME/configurations/service_config_versions?service_name=HDFS&is_current=true" | grep "dfs.datanode.http.address" | awk -F"\"" '{print $4}'

Example: finding the NameNode HTTP port.

# curl -s -u admin:admin -H "X-Requested-By:ambari" -X GET "http://$AMBARI_HOSTNAME:8080/api/v1/clusters/$CLUSTER_NAME/configurations/service_config_versions?service_name=HDFS&is_current=true" | grep "dfs.namenode.http-address" | awk -F"\"" '{print $4}'

Also, if you have filesystem access, you can get the config property values from the following file:

# grep -A1 'dfs.namenode.http-address' /etc/hadoop/conf/hdfs-site.xml
# grep -A1 'dfs.datanode.http.address' /etc/hadoop/conf/hdfs-site.xml

Another option is to get the output of the following command:

# hdfs getconf -confKey "dfs.datanode.http.address"
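If you want a parse that is more robust than grep/awk (field order in the JSON can vary), a hedged sketch piping the same API response through python; the items/configurations/properties layout here is assumed from the Ambari service_config_versions response, so verify it against your Ambari version:

# curl -s -u admin:admin -H "X-Requested-By:ambari" -X GET "http://$AMBARI_HOSTNAME:8080/api/v1/clusters/$CLUSTER_NAME/configurations/service_config_versions?service_name=HDFS&is_current=true" | python -c '
import json, sys
data = json.load(sys.stdin)                # parse the full API response
for item in data["items"]:
    for cfg in item["configurations"]:     # each config type (hdfs-site, core-site, ...)
        if cfg["type"] == "hdfs-site":
            print(cfg["properties"].get("dfs.namenode.http-address"))
            print(cfg["properties"].get("dfs.datanode.http.address"))
'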
06-11-2019
04:24 AM
@Monalisa Tripathy Did that answer your query? If yes, then please mark this thread as answered by clicking the "Accept" button. If you have any additional query, please post back.
06-11-2019
04:21 AM
@Rohit Sharma Did you find that the below query returned any records in your Ambari DB? If yes, then that explains the issue, and the article shared above should help in getting it fixed.

select * from alert_current where history_id not in (select alert_id from alert_history);
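If it does return orphaned rows, the usual remediation from the article referenced above is to stop ambari-server, back up the Ambari database, delete the orphaned rows, and start Ambari again; a hedged sketch (run the SQL in your Ambari DB client):

# ambari-server stop
delete from alert_current where history_id not in (select alert_id from alert_history);
# ambari-server start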
06-11-2019
04:19 AM
@John Are you still getting the same error, or did the suggestion resolve the issue?
06-11-2019
04:12 AM
@Nani Bigdata Did that answer your query? It is good community practice to mark the answer as "Accepted" if it answers your query, or to post back if you have any additional query.