Member since
01-14-2020
6
Posts
0
Kudos Received
0
Solutions
04-14-2020
08:58 PM
Hi, the files you mentioned are not available on the node on which Atlas is installed. Could you please advise?
04-07-2020
11:28 AM
Hi,
I am trying to move all the master node services to a new host and decommission the old node. I was able to move a few of the services through the Ambari UI using the Move option under Service Actions, and I understand that services without a Move option have to be moved through the Ambari APIs. If that is correct, can I get some help on how to move a service through the API? I want to move every service off that master node and decommission it once they have all been moved to the new node.
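For components without a Move wizard, the usual pattern with the Ambari REST API is: stop the component on the old host, register it on the new host, install and start it there, then delete it from the old host. The sketch below is a dry run under assumed placeholder values (server URL, cluster, host, and component names are all hypothetical); it only prints the curl calls so the sequence can be reviewed before running it against a live Ambari server.

```shell
# Placeholder values -- substitute your Ambari server, cluster, hosts,
# and the component you are moving.
AMBARI="http://ambari-server:8080/api/v1"
CLUSTER="mycluster"
OLDHOST="oldnode.example.com"
NEWHOST="newnode.example.com"
COMPONENT="HISTORYSERVER"
AUTH="-u admin:admin -H X-Requested-By:ambari"

# The calls are printed (and saved to a plan file) rather than executed,
# so nothing touches the cluster until you run them yourself.
{
  echo "# 1. Stop the component on the old host"
  echo "curl $AUTH -X PUT -d '{\"HostRoles\":{\"state\":\"INSTALLED\"}}' $AMBARI/clusters/$CLUSTER/hosts/$OLDHOST/host_components/$COMPONENT"
  echo "# 2. Register the component on the new host"
  echo "curl $AUTH -X POST $AMBARI/clusters/$CLUSTER/hosts/$NEWHOST/host_components/$COMPONENT"
  echo "# 3. Install, then start, the component on the new host"
  echo "curl $AUTH -X PUT -d '{\"HostRoles\":{\"state\":\"INSTALLED\"}}' $AMBARI/clusters/$CLUSTER/hosts/$NEWHOST/host_components/$COMPONENT"
  echo "curl $AUTH -X PUT -d '{\"HostRoles\":{\"state\":\"STARTED\"}}' $AMBARI/clusters/$CLUSTER/hosts/$NEWHOST/host_components/$COMPONENT"
  echo "# 4. Delete the component from the old host"
  echo "curl $AUTH -X DELETE $AMBARI/clusters/$CLUSTER/hosts/$OLDHOST/host_components/$COMPONENT"
} | tee /tmp/ambari-move-plan.txt
```

Note that some components (for example a NameNode in HA, or services with backing databases such as Hive Metastore and Oozie) need extra configuration changes beyond this sequence, so check each service's documentation before deleting anything from the old host.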
02-28-2020
07:55 AM
Hi All,
I would like to know how I can verify or validate the SSL certificate expiry dates for all the Hadoop components installed on my HDP cluster. I would really appreciate any help on methods to check the SSL certificates. Thanks
- Tags:
- certificate
- HDP
- ssl
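One common way to check expiry is with `openssl`: for a live endpoint, `openssl s_client -connect host:port </dev/null 2>/dev/null | openssl x509 -noout -enddate`, and for Java keystores, `keytool -list -v -keystore <file>` (look for the "Valid from ... until" lines). The sketch below generates a throwaway self-signed certificate as a stand-in so the expiry checks are demonstrable offline; on the cluster you would feed it the certificate pulled from each component's endpoint.

```shell
# Stand-in certificate (valid 30 days) -- on a real cluster you would
# instead fetch the cert from each endpoint with `openssl s_client`.
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 30 -nodes -subj "/CN=demo" 2>/dev/null

# Print the expiry date (notAfter=...).
openssl x509 -noout -enddate -in /tmp/demo.crt

# -checkend N exits 0 if the cert is still valid N seconds from now;
# here: warn if it expires within 7 days (604800 s).
if openssl x509 -checkend 604800 -noout -in /tmp/demo.crt >/dev/null; then
  echo "OK: valid for at least 7 more days"
else
  echo "WARNING: expires within 7 days"
fi
```

To cover the whole cluster, loop the `s_client`/`x509 -enddate` pair over each component's host:port (NameNode UI, ResourceManager UI, Ranger, etc.) and the `keytool` check over each keystore file referenced in the services' ssl-server configurations.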
01-15-2020
08:49 AM
I have a cluster with 2 NameNodes and 4 DataNodes, and one of the DataNodes is down for some reason. It does not start even after multiple attempts from the Ambari UI. Can anyone please suggest how to bring my DataNode back up and running?
Thanks in advance.
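When a DataNode refuses to start from Ambari, the reason is almost always recorded in the DataNode log on that host (typically under /var/log/hadoop/hdfs/). The log excerpt below is fabricated so the grep is demonstrable; point the same command at the real log file on the failing node.

```shell
# Fabricated excerpt standing in for the real DataNode log, usually
# /var/log/hadoop/hdfs/hadoop-hdfs-datanode-<hostname>.log on the host.
cat > /tmp/datanode-demo.log <<'EOF'
2020-01-15 08:40:11,002 INFO  datanode.DataNode: STARTUP_MSG: Starting DataNode
2020-01-15 08:40:12,315 FATAL datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned): Incompatible clusterIDs in /hadoop/hdfs/data
EOF

# The most recent ERROR/FATAL lines normally name the cause: incompatible
# clusterIDs, a failed volume, a port already in use, ulimit problems, etc.
grep -E 'ERROR|FATAL' /tmp/datanode-demo.log | tail -n 20
```

If the log shows nothing useful, starting the daemon by hand as the hdfs user usually surfaces the error directly on the console.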
01-14-2020
01:02 PM
@jsensharma Thanks for the quick response. I have learned that increasing "yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage" is not a recommended option, and moreover the disk usage has already reached 98%, so I think I need to free some disk space. I am just curious what kind of data I can safely clear from the data directory. Any suggestions? Also, could a rogue YARN application be filling up the disk on that particular DataNode? If so, how can I check that? Thanks in advance.
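A quick way to see whether one application or user is filling the disk is to rank the per-user `usercache` directories under `yarn.nodemanager.local-dirs` (and similarly the container log directories under `yarn.nodemanager.log-dirs`) by size, then match any oversized entry against `yarn application -list -appStates RUNNING`. The directory layout below is fabricated so the ranking is demonstrable; on the node, point LOCAL_DIR at the real local-dirs path (often /hadoop/yarn/local).

```shell
# Fabricated stand-in for a yarn.nodemanager.local-dirs directory.
LOCAL_DIR=/tmp/yarn-local-demo
mkdir -p "$LOCAL_DIR/usercache/alice/filecache" \
         "$LOCAL_DIR/usercache/bob/appcache/application_1579000000000_0042"
dd if=/dev/zero \
   of="$LOCAL_DIR/usercache/bob/appcache/application_1579000000000_0042/blob" \
   bs=1024 count=256 2>/dev/null

# Rank per-user usage; a runaway application shows up as one usercache
# entry (and usually a single application_* appcache dir) far larger
# than the rest.
du -sk "$LOCAL_DIR"/usercache/* | sort -rn | head
```

The application_* directory name identifies the offending job, so it can be killed with `yarn application -kill <application id>` if it really is runaway.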
01-14-2020
11:49 AM
Hi,
In my HDP cluster, the data directory on one particular DataNode is almost full (98%), while the data directories on the other DataNodes are below 60%. How can I find out why HDFS is writing so much data to that one DataNode? I am worried this might affect cluster performance, and I would like to know how I can distribute the data across the DataNodes. Can I use Rebalance HDFS under HDFS > Service Actions? The node is also raising a NodeManager unhealthy alert, as the threshold is set to 90%. If I need to clean up the disk, what kind of data should I consider removing? Kindly advise.
Thanks in advance.
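The HDFS Balancer (Rebalance HDFS in Service Actions, or `hdfs balancer -threshold <pct>` on the command line) is the standard way to even out DataNode utilization; the threshold is the allowed deviation, in percentage points, of each node's usage from the cluster mean. One common cause of this kind of skew is a heavy writer running on that host, since HDFS tries to place the first replica of each block on the writer's local DataNode. Before balancing, `hdfs dfsadmin -report` shows per-node usage; the excerpt below is fabricated so the filtering is demonstrable offline.

```shell
# Fabricated excerpt of `hdfs dfsadmin -report` output -- run the real
# command on the cluster to get live numbers.
cat > /tmp/dfs-report-demo.txt <<'EOF'
Name: 10.0.0.11:50010 (dn1.example.com)
DFS Used%: 98.12%
Name: 10.0.0.12:50010 (dn2.example.com)
DFS Used%: 57.40%
EOF

# Per-node utilization at a glance; a large spread like this suggests
# running:  hdfs balancer -threshold 10
grep -E '^Name:|^DFS Used%' /tmp/dfs-report-demo.txt
```

Note that the balancer moves only HDFS block data; if the NodeManager alert is driven by YARN local/log directories on the same disk, those have to be cleaned up separately.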