Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2463 | 04-27-2020 03:48 AM |
 | 4914 | 04-26-2020 06:18 PM |
 | 3987 | 04-26-2020 06:05 PM |
 | 3241 | 04-13-2020 08:53 PM |
 | 4951 | 03-31-2020 02:10 AM |
10-11-2018
12:46 AM
@Jack Madden If this answers your query/issue, please mark this HCC thread as answered by clicking the "Accept" link on the correct answer. That way other HCC users can quickly find the answer.
10-10-2018
12:25 PM
@Muhammad Asghar Your local repository seems to have a corrupted "repomd.xml" file, or it might not be accessible. Can you please check whether you can access it and verify its content?
# curl http://ufm.hdf03.com/repo/HDF/centos7/3.2.0.0-520/repodata/repomd.xml
The content of the above file should match the one published here:
# curl http://public-repo-1.hortonworks.com/HDF/amazonlinux2/3.x/updates/3.2.0.0/repodata/repomd.xml
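As a quick sanity check, you can compare the two files directly (a minimal sketch, assuming both URLs are reachable from the host you run it on and that bash, curl, and diff are available; no output means the files are identical):
# diff <(curl -s http://ufm.hdf03.com/repo/HDF/centos7/3.2.0.0-520/repodata/repomd.xml) <(curl -s http://public-repo-1.hortonworks.com/HDF/amazonlinux2/3.x/updates/3.2.0.0/repodata/repomd.xml)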
10-10-2018
11:12 AM
@Sateesh Battu From the Ambari Server host, can you please try running the following command once to see if it is able to put a dummy key/value inside "hivemetastore-site"? If the command succeeds, you should then be able to see "hivemetastore-site" in the Ambari UI as well.
# /var/lib/ambari-server/resources/scripts/configs.py --user=admin --password=admin --port=8080 --action=set --host=localhost --cluster=TestCluster --config-type=hivemetastore-site -k "key1" -v "value1"
Please make sure to run the above command from the Ambari Server host, and change the cluster name "TestCluster" to your own cluster name.
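To confirm the write went through, you can read the config type back (a sketch, assuming your Ambari release's configs.py supports --action=get, as recent versions do; adjust the credentials, port, and cluster name to your environment):
# /var/lib/ambari-server/resources/scripts/configs.py --user=admin --password=admin --port=8080 --action=get --host=localhost --cluster=TestCluster --config-type=hivemetastore-site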
10-10-2018
10:18 AM
@Sateesh Battu Maybe you can take an Ambari DB dump backup first (taking the DB backup is a must) and then perform the Hive service cleanup from the Ambari DB as described in "How to cleanup service from Ambari database": https://community.hortonworks.com/articles/79546/how-to-cleanup-service-from-ambari-database.html There are also detailed steps in https://community.hortonworks.com/articles/81939/how-to-resolve-ambari-db-inconsistency-error-you-h.html A backup sketch follows below.
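For the backup step, a minimal sketch assuming the default embedded PostgreSQL Ambari database (database and user both named "ambari"); if you use MySQL/MariaDB or an external database, substitute the equivalent dump command:
# ambari-server stop
# pg_dump -U ambari ambari > /tmp/ambari_db_backup_$(date +%F).sql
# ambari-server start
Stopping Ambari Server first keeps the dump consistent with what is on disk.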
10-10-2018
09:48 AM
@Sateesh Battu It looks like the Hive service is not installed properly. Since you mentioned that this is a fresh installation of the Hive Metastore, it will be quicker and cleaner to delete the "Hive" service from the Ambari UI and then install it again. The alternative approach would be to review the whole Hive service installation log to find out why the mentioned config is missing, and then fix it from the Ambari DB or via API calls (see the sketch below), but reinstalling is the better option for a freshly installed service.
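If you prefer to remove the service via the Ambari REST API instead of the UI, a sketch (AMBARI_HOST and CLUSTER_NAME are placeholders for your own values; the service must be stopped before it can be deleted):
# curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"Stop HIVE"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/HIVE
# curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/HIVE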
10-10-2018
12:14 AM
@Jack Madden Are you talking about the following two parameters?
dfs.namenode.secondary.https-address (50091) => The secondary namenode HTTPS server address and port.
dfs.namenode.secondary.http-address (50090) => The secondary namenode HTTP server address and port.
Reference: https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
Those properties can be added to "Custom hdfs-site" using the "Add Property" option in Ambari: Ambari UI --> HDFS --> Configs --> Advanced --> "Custom hdfs-site" --> "Add Property ..." (click).
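Once the properties are saved and HDFS has been restarted, you can confirm they took effect from any host with the HDFS client installed (a sketch; the values printed should be the host:port pairs you configured):
# hdfs getconf -confKey dfs.namenode.secondary.http-address
# hdfs getconf -confKey dfs.namenode.secondary.https-address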
10-09-2018
07:19 AM
@Surendra Shringi This looks like a tuning issue. Here we see that the requested memory is more than the max memory: requestedMemory=10096, maxMemory=7936. Please refer to the following Support KB to learn more about this kind of error: https://community.hortonworks.com/content/supportkb/48814/resource-manager-unable-to-start.html A note on the usual settings follows below.
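The maxMemory value normally comes from yarn.scheduler.maximum-allocation-mb, while the requestedMemory comes from whichever component is asking for the container (for Hive on Tez that is typically hive.tez.container.size or tez.am.resource.memory.mb); treat the exact property names as assumptions until you confirm them in your own configs. Either lower the request below 7936 MB or raise the ceiling above 10096 MB. To inspect the current ceiling and node capacity (a sketch, assuming the usual client config path):
# grep -A1 -e 'yarn.scheduler.maximum-allocation-mb' -e 'yarn.nodemanager.resource.memory-mb' /etc/hadoop/conf/yarn-site.xml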
10-09-2018
06:41 AM
1 Kudo
@Takefumi Oide In the Ambari UI operations log you cannot see the operations that Ambari performs internally via the agents; only user-performed (explicit) operations are shown there. The script "/usr/lib/ambari-agent/lib/ambari_agent/RecoveryManager.py" is basically responsible for the recovery of service components. For example, when we kill the AMS Collector and Auto Restart is enabled for this component, we can see the following kind of messages in the agent log, which tell us whether the "AUTO_EXECUTION_COMMAND" was performed:
# grep 'Adding recovery command START for component' /var/log/ambari-agent/ambari-agent.log
INFO 2018-10-09 06:33:52,324 Controller.py:410 - Adding recovery command START for component METRICS_COLLECTOR
...
INFO 2018-10-09 06:33:52,325 ActionQueue.py:113 - Adding AUTO_EXECUTION_COMMAND for role METRICS_COLLECTOR for service AMBARI_METRICS of cluster NewCluster to the queue.
...
INFO 2018-10-09 06:36:25,643 RecoveryManager.py:185 - current status is set to STARTED for METRICS_COLLECTOR
Or just grep for that script name:
# grep 'RecoveryManager.py' /var/log/ambari-agent/ambari-agent.log
INFO 2018-10-09 06:33:52,310 RecoveryManager.py:255 - METRICS_COLLECTOR needs recovery, desired = STARTED, and current = INSTALLED.
INFO 2018-10-09 06:36:25,643 RecoveryManager.py:185 - current status is set to STARTED for METRICS_COLLECTOR
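If you want to watch a recovery happen live while you kill the component, a simple sketch (assuming the default agent log location shown above):
# tail -F /var/log/ambari-agent/ambari-agent.log | grep --line-buffered 'RecoveryManager.py'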
10-09-2018
05:25 AM
@Surendra Shringi Are you able to telnet to the HS2 hostname and port? From the Hive client host:
# telnet $HS2_HOSTNAME 10000
(OR)
# nc -v $HS2_HOSTNAME 10000
On the HS2 host, can you please check whether port 10000 is listening and bound properly, and whether iptables is disabled?
# netstat -tnlpa | grep 10000
# service iptables stop
Also please restart HS2 and then check whether you find any errors in the HS2 log:
# ls -l /var/log/hive/hiveserver2.log
Also, can you please try the ZooKeeper-based dynamic URL, like the following, to see if it works?
# beeline -n barney -p bedrock -u "jdbc:hive2://m1.hdp.local:2181,m2.hdp.local:2181,m3.hdp.local:2181/<db>;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
https://community.hortonworks.com/articles/4103/hiveserver2-jdbc-connection-url-examples.html
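You can also confirm that HiveServer2 has actually registered itself in ZooKeeper (a sketch, assuming an HDP-style install path and the default zooKeeperNamespace of "hiveserver2"; the listed znodes should include your HS2 host and port):
# /usr/hdp/current/zookeeper-client/bin/zkCli.sh -server m1.hdp.local:2181 ls /hiveserver2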
10-05-2018
09:34 AM
@Ranganathan G T Great to know that it helped. It would also be wonderful if you could mark this HCC thread as answered by clicking the "Accept" button on the helpful answer, so that other HCC users can quickly browse the answered queries.