Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
Views | Posted
---|---
2459 | 04-27-2020 03:48 AM
4896 | 04-26-2020 06:18 PM
3984 | 04-26-2020 06:05 PM
3227 | 04-13-2020 08:53 PM
4939 | 03-31-2020 02:10 AM
01-02-2019 08:16 AM
1 Kudo
@Michael Bronson Can you please share the complete AMS collector logs, along with the HMaster log, which you can find in the same directory as the AMS collector log? A few more things to check:

1. Do you have enough free memory available on the system where AMS is running?
2. How many nodes are present in your cluster? Based on that we can check whether the heap settings for the AMS collector and HMaster are OK.
3. Is your AMS collector running in Embedded Mode or Distributed Mode? Looking at collector-gc.log and gc.log (for the HMaster process) can also be helpful.
4. What is the Ambari version, and was any Ambari upgrade performed recently? You can check with: # rpm -qa | grep ambari
5. When was AMS last running fine? Were any recent changes made to the AMS configs?
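If it helps, here is a minimal sketch of commands to gather some of that information, assuming the default AMS locations (/var/log/ambari-metrics-collector for logs and /etc/ambari-metrics-collector/conf for the config; your paths may differ):

# free -m
# ls -l /var/log/ambari-metrics-collector/
# grep -A1 'timeline.metrics.service.operation.mode' /etc/ambari-metrics-collector/conf/ams-site.xml

The last command should show whether the collector is running in embedded or distributed mode.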
01-01-2019 09:01 PM
@Rajesh Sampath The problem seems to be caused by "NoClassDefFoundError: Could not initialize class org.apache.tez.runtime.library.api.TezRuntimeConfiguration", as shown below:

errorMessage=Cannot recover from this error: java.lang.NoClassDefFoundError: Could not initialize class org.apache.tez.runtime.library.api.TezRuntimeConfiguration
at org.apache.tez.runtime.library.common.writers.UnorderedPartitionedKVWriter.<init>(UnorderedPartitionedKVWriter.java:264)
at java.lang.Thread.run(Thread.java:745)

Can you please check whether the following kind of JAR exists and is readable? Example (your HDP3 version might be slightly different, so the JAR name/version might change slightly):

# ls -l /usr/hdp/3.0.0.0-1634/tez/tez-runtime-library-0.9.1.3.0.0.0-1634.jar
-rw-r--r--. 1 root root 768636 Jul 12 20:40 /usr/hdp/3.0.0.0-1634/tez/tez-runtime-library-0.9.1.3.0.0.0-1634.jar

Can you also check whether cleaning up the hive usercache fixes the issue? What is the umask configured for your cluster? Is it "022"?
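In case you want to try the usercache cleanup, a minimal sketch, assuming the default yarn.nodemanager.local-dirs location /hadoop/yarn/local (adjust to your configured local dirs). On each NodeManager host, stop the NodeManager first, then:

# rm -rf /hadoop/yarn/local/usercache/hive/*

Then start the NodeManager again; the cache will be rebuilt on the next job run.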
01-01-2019 12:41 AM
@Weiss Ruth It looks like there is another duplicate thread opened for the same query. Please close one of them. https://community.hortonworks.com/questions/232188/unable-to-login-with-root-user-using-hadoop-passwo.html?childToView=232213#answer-232213
01-01-2019 12:40 AM
@Weiss Ruth This HCC thread looks like a duplicate of another one. As per HCC recommendations, please open only one thread per issue so that all the relevant answers can be found in one place. https://community.hortonworks.com/questions/232181/sign-in-with-root-user.html?childToView=232212#answer-232212 Copying my response from the other thread: Please make sure that you are using the correct SSH port to connect to the Sandbox. The port will be 2222, something like the following:

# ssh root@127.0.0.1 -p 2222
Enter password: hadoop

Or you can use the Web Client to SSH into the sandbox by accessing the following URL: http://localhost:4200
01-01-2019 12:38 AM
@Weiss Ruth Please make sure that you are using the correct SSH port to connect to the Sandbox. The port will be 2222, something like the following:

# ssh root@127.0.0.1 -p 2222
Enter password: hadoop

Or you can use the Web Client to SSH into the sandbox by accessing the following URL: http://localhost:4200
12-31-2018 05:58 AM
@Nihal Shelke You mentioned that you are trying to use Apache HBase 2.1.1, which is not yet a tested and certified version with the HDP stack. Even the latest HDP 3.1 is certified and tested with HBase 2.0.2, as per the release notes: https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/release-notes/content/comp_versions.html So it will be best if you stick to the tested and certified version.
12-31-2018 01:06 AM
1 Kudo
@PJ Last 7 days of memory for YARN queues:

1. Allocated Memory of the "default" queue:
http://$AMS_COLLECTOR_HOSTNAME:6188/ws/v1/timeline/metrics?metricNames=yarn.QueueMetrics.Queue=root.default.AllocatedMB._max&appId=resourcemanager&startTime=1545612799&endTime=1546217599
2. Reserved Memory of the "default" queue:
http://$AMS_COLLECTOR_HOSTNAME:6188/ws/v1/timeline/metrics?metricNames=yarn.QueueMetrics.Queue=root.default.ReservedMB._max&appId=resourcemanager&startTime=1545612890&endTime=1546217690
3. Pending Memory of the "default" queue:
http://$AMS_COLLECTOR_HOSTNAME:6188/ws/v1/timeline/metrics?metricNames=yarn.QueueMetrics.Queue=root.default.PendingMB._max&appId=resourcemanager&startTime=1545612940&endTime=1546217740
4. Available Memory of the "default" queue:
http://$AMS_COLLECTOR_HOSTNAME:6188/ws/v1/timeline/metrics?metricNames=yarn.QueueMetrics.Queue=root.default.AvailableMB._max&appId=resourcemanager&startTime=1545613074&endTime=1546217874

Please replace the following in the above queries:
1. "$AMS_COLLECTOR_HOSTNAME:6188" with your Ambari Metrics Collector hostname and port (the default port is 6188).
2. The queue name "default" with the name of the queue you want to monitor.
3. The startTime and endTime values so that they cover the last 7 days. You can use an online tool such as https://www.epochconverter.com/ to generate the epoch time. Make sure the generated epoch time is in seconds (10 digits), as in the examples above.
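If you have GNU date on the host, a small sketch to generate the 7-day window automatically and run the first query ("your.collector.host" is a placeholder for your actual collector hostname):

# AMS_COLLECTOR_HOSTNAME=your.collector.host
# START=$(date -d '7 days ago' +%s)
# END=$(date +%s)
# curl "http://${AMS_COLLECTOR_HOSTNAME}:6188/ws/v1/timeline/metrics?metricNames=yarn.QueueMetrics.Queue=root.default.AllocatedMB._max&appId=resourcemanager&startTime=${START}&endTime=${END}"

Here date +%s prints epoch seconds (10 digits), matching the format used in the example queries.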
12-29-2018 12:22 PM
@Rohit Khose Ambari provides a Patch Upgrade feature for individual component upgrades; however, that is possible only when you get a tested and certified VDF from Hortonworks support. NOTE: Before performing a patch upgrade, you must obtain the specific VDF file associated with the patch release from Hortonworks Customer Support. To know more about patch upgrades, please refer to: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.2/bk_ambari-upgrade/content/performing_a_patch_upgrade.html Otherwise, if you just try to install a community release of a higher HBase version, it is not going to work that easily, as many additional dependencies have changed.
12-29-2018 12:06 PM
@Rohit Khose How did you upgrade only HBase in your HDP 3.0.1 installation? By default, HBase 2.0.0 ships with HDP 3.0.1: https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/release-notes/content/comp_versions.html HDP 3.1 provides Apache HBase 2.0.2: https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/release-notes/content/comp_versions.html
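To confirm what is actually installed on the node, something like the following should show the HBase build and the packages it came from (a sketch; exact package names may vary by OS):

# hbase version
# rpm -qa | grep hbase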
12-29-2018 11:14 AM
@Rohit Khose Can you please check the following: 1. Do the following kind of JARs exist, and were they upgraded properly? /usr/hdp/Sn-3.0.1.0-187/hadoop/client/woodstox-core-5.0.3.jar
/usr/hdp/Sn-3.0.1.0-187/hadoop/client/woodstox-core.jar
/usr/hdp/Sn-3.0.1.0-187/hadoop/lib/woodstox-core-5.0.3.jar
/usr/hdp/Sn-3.0.1.0-187/hadoop/lib/ranger-hdfs-plugin-impl/woodstox-core-5.0.3.jar
/usr/hdp/Sn-3.0.1.0-187/hadoop/lib/ranger-yarn-plugin-impl/woodstox-core-5.0.3.jar
/usr/hdp/Sn-3.0.1.0-187/hadoop-hdfs/lib/woodstox-core-5.0.3.jar
/usr/hdp/Sn-3.0.1.0-187/hbase/lib/woodstox-core-5.0.3.jar
/usr/hdp/Sn-3.0.1.0-187/hbase/lib/ranger-hbase-plugin-impl/woodstox-core-5.0.3.jar 2. Make sure that you do not have any different version of the "woodstox" JAR added to your HBase classpath. Search for all woodstox JARs on your system to find out whether any belong to a different version, as in the sketch after this list. 3. Also please check that you have not set any HADOOP_HOME, HADOOP_CLASSPATH, or HBASE_CLASSPATH kind of variables on your system pointing to a different version of the Hadoop/HBase lib binaries.
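A minimal sketch for checks 2 and 3 (the find may take a while when run from /):

# find / -name 'woodstox*.jar' 2>/dev/null
# env | grep -E 'HADOOP_HOME|HADOOP_CLASSPATH|HBASE_CLASSPATH'

The first command lists every woodstox JAR on the node so you can compare versions; the second shows whether any of those classpath variables are set in the current environment.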