Member since: 02-18-2016
Posts: 141
Kudos Received: 19
Solutions: 18

My Accepted Solutions
Title | Views | Posted
---|---|---
| 3485 | 12-18-2019 07:44 PM
| 3515 | 12-15-2019 07:40 PM
| 1352 | 12-03-2019 06:29 AM
| 1371 | 12-02-2019 06:47 AM
| 4096 | 11-28-2019 02:06 AM
11-18-2019
11:24 PM
@divya_thaore
1. If you are starting the service via the Ambari / Cloudera Manager UI, check the operation logs displayed in the UI while the service starts; click into the service's operational logs and look for any errors.
2. If you do not see any operational logs, or no operation is triggered when you start/restart the service, then kindly restart the agent service once [ambari-agent / cloudera-scm-agent] - see the reference commands below.
3. Otherwise, please check the logs from the CLI as suggested by @Shelton.
If you still need help, please revert.
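For reference, a typical agent restart (assuming default service names for an Ambari or Cloudera Manager install) looks like:
# ambari-agent restart
or
# service cloudera-scm-agent restart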
11-18-2019
11:11 PM
Hi @Manoj690, did the earlier community link help to resolve the issue - https://community.cloudera.com/t5/Support-Questions/Ambari-metircs-not-started/m-p/283228#M210525 ? Please confirm!
11-18-2019
11:09 PM
@mike_bronson7 The latest command you posted again has a typo: the trailing "R" is missing in the command below -
>>curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://node02:8080/api/v1/clusters/HDP/hosts/node01/host_components/SPARK2_THRIFTSERVE"
Please try again and pass along the new error, if any.
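For clarity, the corrected command with the trailing "R" added would be:
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://node02:8080/api/v1/clusters/HDP/hosts/node01/host_components/SPARK2_THRIFTSERVER"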
11-18-2019
03:48 AM
Yes, you can remove the Pig client using the Ambari API.
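As a sketch (host, cluster, and credentials below are placeholders; stop the component in Ambari first), the call follows the same pattern as other host-component deletes:
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://<ambari-host>:8080/api/v1/clusters/<cluster>/hosts/<host>/host_components/PIG"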
11-18-2019
02:32 AM
@shyamshaw This was already highlighted in the community here by @LesterMartin - https://community.cloudera.com/t5/Support-Questions/Reason-for-Hive-dependency-on-PIg-during-installation-of/td-p/239407
>>>> Probably for using HCatalog, which can be extremely useful for Pig programmers even if they don't want to use Hive and just leverage it for schema management instead of defining AS clauses in their LOAD commands? Just as likely this is something hard-coded into Ambari? If you really don't want Hive, I bet you can just delete it after installation. For giggles, I stood up an HDFS-only HDP 3.1.0 cluster for https://community.hortonworks.com/questions/245432/is-it-possible-to-install-only-hdfs-on-linux-mach... and just added Pig (required YARN, MR, Tez & ZK, but that makes sense!) and did NOT require Hive to be added as seen below.
Please check the link for full details. Likewise, you can remove Pig after installation without impacting your Hive service; a sketch of the API call follows.
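If you want to remove the whole Pig service via the Ambari API rather than a single client, a sketch (placeholder host and cluster names; the service must be stopped first) would be:
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://<ambari-host>:8080/api/v1/clusters/<cluster>/services/PIG"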
11-18-2019
02:24 AM
Hi @Manoj690, it seems your AMS HBase master is not able to start. Please try the steps below:
1. In the Ambari Dashboard, go to the 'Ambari Metrics' section and under the 'Service Actions' dropdown click 'Stop'. Check and confirm from the backend that the AMS process is stopped (a quick check is sketched after these steps). If the process is still running, stop it with:
# ambari-metrics-collector stop
2. Delete all AMS HBase data:
a. In the Ambari Dashboard, under the Ambari Metrics section, search for the configuration value "hbase.rootdir". Back up and then remove all files under "hbase.rootdir". E.g.:
# hdfs dfs -cp /user/ams/hbase/* /tmp/
# hdfs dfs -rm -r -skipTrash /user/ams/hbase/*
b. Likewise, search for the configuration value "hbase.tmp.dir". Back up the directory and remove the data. E.g.:
# cp -r /var/lib/ambari-metrics-collector/hbase-tmp/* /tmp/
# rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/*
3. Remove the znode for HBase in the ZooKeeper CLI. Log in to Ambari UI -> Ambari Metrics -> Configs -> Advanced ams-hbase-site and search for the property "zookeeper.znode.parent", then:
# /usr/hdp/current/zookeeper-client/bin/zkCli.sh
rmr /ams-hbase-secure
4. Start AMS.
Let me know if you still have the issue.
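For step 1, to confirm from the backend that the collector process is really down, a quick check (process name assumes a default AMS install) could be:
# ps -ef | grep ambari-metrics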
11-17-2019
11:10 PM
1 Kudo
Hi @Manoj690,
1. First, check which directories are configured for DFS data storage: log in to Ambari UI -> Services -> HDFS -> Configs -> [search for dfs.datanode.data.dir] and capture the list of directories defined there. E.g., I have the list: /data01/hadoop/hdfs/data,/data02/hadoop/hdfs/data
2. Log in to the datanodes and go to the mount. In my case:
$ cd /data01/hadoop/hdfs/
3. Check if there is any other directory/data inside "/data01/hadoop/hdfs/" or "/data01/".
4. Any data other than the ".../hdfs/data" directories is counted as non-DFS usage, and that is what dfsadmin -report shows.
5. You need to get rid of that data, which will reduce your non-DFS used (see the quick check below).
You can share your output if you have any confusion or need help.
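To quantify what is consuming non-DFS space, a quick check (mount path taken from my example above; adjust to yours) could be:
# hdfs dfsadmin -report | grep -i "non dfs"
# du -sh /data01/*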
11-17-2019
10:29 PM
The chronyd commands are similar to NTP's - you can refer to this for debugging: https://www.thegeekdiary.com/centos-rhel-7-tips-on-troubleshooting-ntp-chrony-issues/
11-17-2019
10:27 PM
Please check this once:
1. Try running "ntpdate ipsap01.ecb.de" on all hosts and check whether any issue is reported while running this command.
2. Make sure chronyd/ntp.conf is the same on all nodes, then run:
# hwclock --systohc
# systemctl restart cloudera-scm-agent
3. If the above does not help, you need to debug on the NTP server side. Execute the commands below:
# ntpq -c pe
Note that if the refid column indicates ".INIT.", it can suggest a communication issue.
# ntpq -c as
If the reach column indicates "no", it suggests that the client cannot reach peer hosts.
4. You probably also need to check the stratum of your NTP servers: the "assID" from ntpq -c as can be used with ntpq -c "rv assID" to determine the stratum. The lower the stratum the better; the upper limit is 15, and stratum 16 indicates that a device is unsynchronized.
# ntpq -c "rv <association_id_from_above_command_output>"
11-15-2019
06:10 AM
@mike_bronson7 You just need to back up /hadoop/hdfs/namenode/current from the active namenode. Note that if you take the backup a week before the activity, and your first cluster keeps serving client requests in the meantime, you will lose any data written after the backup. So the best approach is to run saveNamespace and take the backup right when you do the activity, with clients frozen so they are not accessing the cluster.
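A sketch of that sequence on the active namenode (the backup path here is just an example):
# hdfs dfsadmin -safemode enter
# hdfs dfsadmin -saveNamespace
# tar -czf /tmp/nn-current-backup.tar.gz /hadoop/hdfs/namenode/current
# hdfs dfsadmin -safemode leave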