I had enabled HiveServer2 Interactive on HDP 2.6.2. Later I disabled it and tried to restart Hive, but HiveServer2 is not able to start.
The error is in the attached file: hive2-error.txt
Could anyone advise how to fix it?
Your HDFS seems to be running very slowly. Please check the HDFS logs (especially the NN & DN logs).
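A quick way to sanity-check HDFS is shown below; the log paths assume the default HDP locations, so adjust them if your cluster logs elsewhere:

# Check overall HDFS health: capacity, live/dead DataNodes, under-replicated blocks
hdfs dfsadmin -report
# HiveServer2 start will also hang if the NameNode is stuck in safe mode
hdfs dfsadmin -safemode get
# Scan the most recent NN/DN log entries for warnings and errors
# (default HDP log locations; adjust for your cluster)
grep -iE 'warn|error' /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log | tail -n 50
grep -iE 'warn|error' /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log | tail -n 50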
While starting the Hive service, Ambari tries to copy a few resource tarballs to HDFS, and we can see that each copy is taking a very long time:
# grep 'copy' /Users/jsensharma/Downloads/43441-hive2-error.txt
2017-11-06 16:15:16,149 - Called copy_to_hdfs tarball: mapreduce
2017-11-06 16:16:56,458 - DFS file /hdp/apps/2.6.2.0-205/mapreduce/mapreduce.tar.gz is identical to /usr/hdp/2.6.2.0-205/hadoop/mapreduce.tar.gz, skipping the copying
2017-11-06 16:16:56,458 - Will attempt to copy mapreduce tarball from /usr/hdp/2.6.2.0-205/hadoop/mapreduce.tar.gz to DFS at /hdp/apps/2.6.2.0-205/mapreduce/mapreduce.tar.gz.
2017-11-06 16:16:56,458 - Called copy_to_hdfs tarball: tez
2017-11-06 16:18:36,736 - DFS file /hdp/apps/2.6.2.0-205/tez/tez.tar.gz is identical to /usr/hdp/2.6.2.0-205/tez/lib/tez.tar.gz, skipping the copying
2017-11-06 16:18:36,737 - Will attempt to copy tez tarball from /usr/hdp/2.6.2.0-205/tez/lib/tez.tar.gz to DFS at /hdp/apps/2.6.2.0-205/tez/tez.tar.gz.
2017-11-06 16:18:36,737 - Called copy_to_hdfs tarball: pig
2017-11-06 16:20:16,902 - DFS file /hdp/apps/2.6.2.0-205/pig/pig.tar.gz is identical to /usr/hdp/2.6.2.0-205/pig/pig.tar.gz, skipping the copying
2017-11-06 16:20:16,903 - Will attempt to copy pig tarball from /usr/hdp/2.6.2.0-205/pig/pig.tar.gz to DFS at /hdp/apps/2.6.2.0-205/pig/pig.tar.gz.
2017-11-06 16:20:16,903 - Called copy_to_hdfs tarball: hive
2017-11-06 16:21:57,226 - DFS file /hdp/apps/2.6.2.0-205/hive/hive.tar.gz is identical to /usr/hdp/2.6.2.0-205/hive/hive.tar.gz, skipping the copying
2017-11-06 16:21:57,227 - Will attempt to copy hive tarball from /usr/hdp/2.6.2.0-205/hive/hive.tar.gz to DFS at /hdp/apps/2.6.2.0-205/hive/hive.tar.gz.
2017-11-06 16:21:57,227 - Called copy_to_hdfs tarball: sqoop
2017-11-06 16:23:37,471 - DFS file /hdp/apps/2.6.2.0-205/sqoop/sqoop.tar.gz is identical to /usr/hdp/2.6.2.0-205/sqoop/sqoop.tar.gz, skipping the copying
2017-11-06 16:23:37,471 - Will attempt to copy sqoop tarball from /usr/hdp/2.6.2.0-205/sqoop/sqoop.tar.gz to DFS at /hdp/apps/2.6.2.0-205/sqoop/sqoop.tar.gz.
2017-11-06 16:23:37,471 - Called copy_to_hdfs tarball: hadoop_streaming
2017-11-06 16:25:17,696 - DFS file /hdp/apps/2.6.2.0-205/mapreduce/hadoop-streaming.jar is identical to /usr/hdp/2.6.2.0-205/hadoop-mapreduce/hadoop-streaming.jar, skipping the copying
2017-11-06 16:25:17,697 - Will attempt to copy hadoop_streaming tarball from /usr/hdp/2.6.2.0-205/hadoop-mapreduce/hadoop-streaming.jar to DFS at /hdp/apps/2.6.2.0-205/mapreduce/hadoop-streaming.jar.
Python script has been killed due to timeout after waiting 900 secs
Each copy_to_hdfs step above takes roughly 100 seconds, so these checks alone span about 10 minutes (16:15 to 16:25); together with the rest of the start sequence the task exceeds the limit, and after 900 seconds Ambari times it out.
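To confirm whether plain HDFS writes are slow (rather than something Hive-specific), you can time a small test write; the /tmp path here is just an example:

# Time a small write to HDFS; on a healthy cluster this should finish in a few seconds
time hdfs dfs -put /etc/hosts /tmp/hdfs_write_test
# Clean up the test file
hdfs dfs -rm -skipTrash /tmp/hdfs_write_test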
Another approach would be to increase the timeout in "/etc/ambari-server/conf/ambari.properties" to a higher value, such as 1800:
# grep 900 /etc/ambari-server/conf/ambari.properties
agent.task.timeout=900
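For example, one way to raise the value (editing the file by hand works just as well); Ambari Server must be restarted for the change to take effect:

# Bump the agent task timeout from 900 to 1800 seconds
sed -i 's/^agent.task.timeout=900$/agent.task.timeout=1800/' /etc/ambari-server/conf/ambari.properties
# Restart Ambari Server so the new timeout is picked up
ambari-server restart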