Member since: 11-16-2016
Posts: 8
Kudos Received: 0
Solutions: 2

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 46301 | 12-01-2016 11:18 AM |
| | 15682 | 12-01-2016 11:17 AM |
12-01-2016
11:18 AM
Hi All, fixing the following issue also fixed this one: https://community.hortonworks.com/questions/68989/datanodes-status-not-consistent.html#answer-69461 Regards, Alessandro
12-01-2016
11:17 AM
Hi All, I solved the issue by adding the following configuration in the "Custom hdfs-site" section:

<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
</property>

I also changed the following in the "Advanced hdfs-site" section: from nameMyServer:8020 to ipMyServer:8020. Regards, Alessandro
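Once the NameNode RPC is bound to 0.0.0.0, a quick sanity check from another machine is a plain TCP connect to port 8020. A minimal sketch (the host name below is a placeholder taken from the post; substitute your NameNode's address):

```python
import socket

def rpc_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Useful for confirming that the NameNode RPC port is reachable
    from a remote machine after changing dfs.namenode.rpc-bind-host.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical address from the post -- replace with your NameNode host:
# rpc_reachable("ipMyServer", 8020)
```

If this returns False from the DataNode machines but True on the NameNode host itself, the RPC service is still bound to a local-only interface.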
11-30-2016
09:57 AM
Hi Vedant, thanks for the reply.
I already executed the two commands above, and YARN lists 0 applications.
Then I ran the job again, and I am facing two different situations:
1. The job hangs on: INFO Client: Application report for application_1480498999425_0002 (state: ACCEPTED)
2. The job starts (RUNNING state) but, when executing the first Spark jobs, it stops with the following error: YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
Thanks,
Alessandro
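Besides the CLI, applications stuck in ACCEPTED can be listed through the ResourceManager REST API (GET http://<rm-host>:8088/ws/v1/cluster/apps in Hadoop 2.7's default web UI). A minimal sketch that filters such applications out of the JSON payload; the payload shape shown is an assumption based on that API:

```python
import json

def accepted_apps(apps_json: str) -> list:
    """Return the IDs of applications in ACCEPTED state from the
    ResourceManager's /ws/v1/cluster/apps JSON payload."""
    data = json.loads(apps_json)
    # The RM returns {"apps": null} when no applications exist.
    apps = (data.get("apps") or {}).get("app") or []
    return [a["id"] for a in apps if a.get("state") == "ACCEPTED"]
```

An application that stays in ACCEPTED usually means the scheduler cannot find a NodeManager with enough free memory/vcores for the ApplicationMaster container.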
11-29-2016
03:42 PM
Hi All, I am not able to submit a Spark job. The error is:
YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
I submit the application with the following command:
spark-submit --class util.Main --master yarn-client --executor-memory 512m --executor-cores 2 my.jar my_config
I installed Apache Ambari version 2.4.1.0 and ResourceManager version 2.7.3.2.5.0.0 on Ubuntu 14.04. What could be the cause of the issue? Thanks
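One common cause of "Initial job has not accepted any resources" is that each executor's actual YARN container request is larger than what any NodeManager can offer. A rough sketch of the arithmetic, assuming the memory-overhead default of that Spark era (max of 384 MB and 10% of executor memory) and a yarn.scheduler.minimum-allocation-mb of 1024 MB; check your own cluster's values:

```python
import math

def yarn_container_mb(executor_mem_mb: int,
                      overhead_min_mb: int = 384,
                      overhead_factor: float = 0.10,
                      yarn_min_alloc_mb: int = 1024) -> int:
    """Approximate the YARN container size Spark requests per executor:
    heap plus overhead, rounded up to a multiple of
    yarn.scheduler.minimum-allocation-mb."""
    overhead = max(overhead_min_mb, int(executor_mem_mb * overhead_factor))
    requested = executor_mem_mb + overhead
    return int(math.ceil(requested / yarn_min_alloc_mb) * yarn_min_alloc_mb)
```

Under these assumptions, --executor-memory 512m becomes a 1024 MB container request (512 + 384 overhead, rounded up), so each NodeManager must have at least that much free in yarn.nodemanager.resource.memory-mb.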
Labels:
- Apache Spark
- Apache YARN
- Cloudera Manager
11-29-2016
03:16 PM
Hi All,
I have a cluster of 4 machines, with a DataNode installed on each machine.
The screenshots below show the Ambari status.
It correctly shows 4/4 DataNodes; however, only 1 seems to be live.
My questions are:
*) Why is it not showing 4 live DataNodes?
*) Is this also the reason why blocks are not replicated (see "under replicated blocks")?
*) Also, when running a Spark job I get: YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_e07_1480428595380_0003_02_000003 on host: slave01.hortonworks.com. Exit status: -1000. Diagnostics: org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1459687468-127.0.1.1-1479480481481:blk_1073741831_1007 file=/hdp/apps/2.5.0.0-1245/spark/spark-hdp-assembly.jar
Could you please help me with the above issues? Thanks
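To cross-check Ambari's view, `hdfs dfsadmin -report` prints the NameNode's own count of live DataNodes. A small sketch that pulls that count out of the report text; the "Live datanodes (N):" line format matches Hadoop 2.x-era reports, but verify against your version's output:

```python
import re

def live_datanodes(report: str) -> int:
    """Extract the live-DataNode count from `hdfs dfsadmin -report`
    output, which contains a line such as 'Live datanodes (4):'.
    Returns 0 if no such line is found."""
    m = re.search(r"Live datanodes\s*\((\d+)\)", report)
    return int(m.group(1)) if m else 0
```

If the NameNode also reports only 1 live DataNode, the other three are failing to register with it, which would explain both the under-replicated blocks and the BlockMissingException when YARN localizes spark-hdp-assembly.jar.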
Labels:
- Apache Hadoop