Member since: 06-13-2017
Posts: 7
Kudos Received: 3
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1557 | 10-23-2019 05:19 AM
11-12-2019 11:45 PM
I agree with @sagarshimpi. Please provide the full log trace from the datanode and check whether you can find any disk-level exception in the logs. You can also open the NN UI --> "Datanode volume failures" tab to confirm this. Leaving safe mode manually will not solve the problem, because the NN did not receive any block reports from these datanodes; if you leave safe mode manually, you will get missing-block alerts for all the blocks you have (approx. 34 million blocks). Did you change any configuration at the HDFS level before restarting the whole HDFS service? If yes, please share the property name. Also perform a disk I/O check on all 4 datanodes using the iostat command and verify that the disks are working fine without any heavy %util, for example as shown below.
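A quick sketch of those two checks. The datanode log path is the usual HDP default and the iostat interval/sample count are arbitrary choices, so adjust both for your environment:

# Look for recent disk-level exceptions in the datanode log (path is an assumption)
$ grep -iE "exception|error" /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log | tail -n 50

# Extended disk statistics, 6 samples at 5-second intervals; watch the %util column for saturated disks
$ iostat -x 5 6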
10-23-2019 09:58 PM
@sunnypathuri Basically, the handler count is the number of threads responsible for processing requests on the Hadoop RPC server. The best practice is to set the RPC handler count property dfs.namenode.handler.count to 20 * log2(number of datanodes), with an upper limit of 200. If you have 12 datanodes, that gives 20 * log2(12) ≈ 72, so dfs.namenode.handler.count should be around 72; at most you could increase it to 100. If you have specified 140, then all 140 handlers will be created and kept available to process requests.

You can verify this by taking a thread dump of the NameNode process and checking the total number of handlers and what they are doing. You can take a thread dump by running $ kill -3 <namenode_pid>. This writes a thread dump of the NameNode process to the .out file under /var/log/hadoop/hdfs (the NameNode log directory). Another method is to check the Grafana dashboard (if you are using Ambari) and look at the RPC queue metrics for the NameNode.
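A rough sketch of both steps, assuming an HDP-style layout where the NameNode .out file lives under /var/log/hadoop/hdfs; the pgrep pattern and the grep on thread names are illustrative, not from the original post:

# Recommended handler count for 12 datanodes: 20 * log2(12) ~= 72
$ awk 'BEGIN { print int(20 * log(12) / log(2) + 0.5) }'

# Find the NameNode PID and trigger a thread dump (written to the NameNode .out file)
$ NN_PID=$(pgrep -f org.apache.hadoop.hdfs.server.namenode.NameNode)
$ kill -3 "$NN_PID"

# Count the RPC handler threads captured in the dump
$ grep -c "IPC Server handler" /var/log/hadoop/hdfs/*.out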
10-23-2019 05:19 AM
2 Kudos
@ShaatH Yes, it is mandatory to disable yarn ats-hbase as described in the document. It only stops ats-hbase and destroys its saved configuration; your actual data will still be present. We destroy ats-hbase because of its old non-HA or non-secure configuration: the destroy removes that old configuration so that, after HA is enabled, ats-hbase can pick up the right and latest configs.
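For reference, this is roughly the sequence that procedure follows; run it as the ATSv2 service user (typically yarn-ats). The user name and exact service name may differ in your environment, so treat this as a sketch rather than the documented steps:

# Switch to the ATSv2 HBase service user (user name is an assumption; adjust for your cluster)
$ su - yarn-ats

# Stop the embedded ats-hbase service and destroy its saved (non-HA) service definition
$ yarn app -stop ats-hbase
$ yarn app -destroy ats-hbase

This matches the point above: only the saved service configuration is removed, not the stored data.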
06-29-2017 02:00 PM
I am running the Spark command below.

Spark command:

spark-submit --master yarn --deploy-mode cluster --class com.hpe.eap.batch.EAPDataRefinerMain --num-executors 2 --executor-cores 1 --executor-memory 1g --driver-memory 2g --jars application.json,/usr/hdp/current/spark-client/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/current/spark-client/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/current/spark-client/lib/datanucleus-core-3.2.10.jar, eap-spark-refiner-1.0.jar --files /etc/spark/conf/hive-site.xml

I am getting the error below.
ERROR ApplicationMaster: User class threw exception: java.lang.LinkageError: ClassCastException: attempting to cast jar:file:/data19/hadoop/yarn/local/filecache/79/spark-hdp-assembly.jar!/javax/ws/rs/ext/RuntimeDelegate.class to jar:file:/data19/hadoop/yarn/local/filecache/79/spark-hdp-assembly.jar!/javax/ws/rs/ext/RuntimeDelegate.class
java.lang.LinkageError: ClassCastException: attempting to cast jar:file:/data19/hadoop/yarn/local/filecache/79/spark-hdp-assembly.jar!/javax/ws/rs/ext/RuntimeDelegate.class to jar:file:/data19/hadoop/yarn/local/filecache/79/spark-hdp-assembly.jar!/javax/ws/rs/ext/RuntimeDelegate.class
at javax.ws.rs.ext.RuntimeDelegate.findDelegate(RuntimeDelegate.java:116)
at javax.ws.rs.ext.RuntimeDelegate.getInstance(RuntimeDelegate.java:91)
Labels:
- Apache Hive
- Apache Spark