Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2092 | 06-15-2020 05:23 AM |
| | 17432 | 01-30-2020 08:04 PM |
| | 2254 | 07-07-2019 09:06 PM |
| | 8711 | 01-27-2018 10:17 PM |
| | 4912 | 12-31-2017 10:12 PM |
09-04-2018
08:27 PM
The replication factor is 3, and we use all 4 disks for HDFS (meaning 80G per datanode). Regarding what you said, how much HDFS space is created? Please advise how to check it.
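A quick way to check the configured HDFS capacity is from the command line; the arithmetic below is a rough sketch assuming 4 datanodes with 4 x 20G disks each and replication factor 3:

```bash
# Filesystem-level summary: configured capacity, used, and available space
hdfs dfs -df -h /

# Rough sizing sketch (assumption: all disk space is usable by HDFS,
# ignoring reserved and non-DFS space):
#   raw capacity       = 4 datanodes * 4 disks * 20G = 320G
#   effective capacity = 320G / replication factor 3 ≈ 106G of user data
```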
09-04-2018
08:24 PM
We run a Spark application on a Hadoop cluster (HDP version 2.6.4, Spark version 2.2) with the following settings: executor instances = 6, executor memory = 35G, spark2 daemon memory = 50G. In the YARN logs we see many warnings like this:
WARN executor.Executor: Issue communicating with driver in heartbeater
What does this message mean, and is additional Spark configuration needed to solve the issue?
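For what it's worth, heartbeat warnings like this are often addressed by raising the heartbeat interval and the network timeout. A minimal sketch of how those options could be passed to spark-submit, with illustrative values and a placeholder application jar (your_app.jar is not from the thread):

```bash
# Illustrative values only; spark.network.timeout must be larger than
# spark.executor.heartbeatInterval, and the right numbers depend on the workload.
spark-submit \
  --master yarn \
  --num-executors 6 \
  --executor-memory 35G \
  --conf spark.executor.heartbeatInterval=60s \
  --conf spark.network.timeout=600s \
  your_app.jar   # placeholder for the actual application
```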
09-04-2018
07:33 PM
I can't tell you exactly, but after I created the tar archive again, that solved my problem.
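For anyone hitting the same issue, a rough sketch of re-creating the Spark 2 YARN archive from the local jars and re-uploading it to HDFS, assuming the standard HDP install path on disk:

```bash
# Assumption: the Spark 2 jars live under the standard HDP install path.
cd /usr/hdp/2.6.4.0-91/spark2/jars

# Rebuild the archive from the local jars
tar -zcvf /tmp/spark2-hdp-yarn-archive.tar.gz *

# Replace the corrupted copy in HDFS (run as the hdfs or spark service user)
hdfs dfs -put -f /tmp/spark2-hdp-yarn-archive.tar.gz \
  /hdp/apps/2.6.4.0-91/spark2/spark2-hdp-yarn-archive.tar.gz
```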
09-04-2018
05:33 PM
Hi all, we installed a new Hadoop cluster (Ambari + HDP version 2.6.4). After installation we noticed a problem with spark-submit, and we finally found that the spark2-hdp-yarn-archive.tar.gz file is corrupted. The full path (in HDFS) is /hdp/apps/2.6.4.0-91/spark2/spark2-hdp-yarn-archive.tar.gz. My question is: what could be the reason that this file is corrupted, given that the cluster is a fresh installation?
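One way to confirm the corruption, as a quick sketch, is to copy the archive out of HDFS and test it with tar:

```bash
# Pull the archive from HDFS to a local temp directory
hdfs dfs -get /hdp/apps/2.6.4.0-91/spark2/spark2-hdp-yarn-archive.tar.gz /tmp/

# List the archive contents without extracting; a corrupted file fails here
tar -tzf /tmp/spark2-hdp-yarn-archive.tar.gz > /dev/null && echo "archive OK"
```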
09-04-2018
05:26 PM
@Ganesh we have 4 datanode machines, each datanode has 4 disks, and each disk is 20G. Let me know if this is the info you wanted.
09-04-2018
12:17 PM
@Geoffrey So the last thing we can do is add a disk on each datanode, or add a datanode machine. What is your opinion?
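If a datanode machine is added, existing blocks do not move to it automatically; as a sketch, the HDFS balancer can redistribute data afterwards (the 10% threshold is an illustrative value):

```bash
# Rebalance block placement across datanodes until each node's utilization
# is within 10 percentage points of the cluster average (illustrative threshold)
hdfs balancer -threshold 10
```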
09-04-2018
11:06 AM
@Geoffrey, yes, we already ran hdfs dfs -expunge and HDFS usage is still around 88%.
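If expunging the trash does not help, one possible next step (a sketch, not something from this thread) is to see which HDFS directories actually hold the data:

```bash
# Show space consumed per top-level HDFS directory, human-readable
hdfs dfs -du -h /

# Check whether per-user trash still holds data (default HDFS trash location)
hdfs dfs -du -h /user/*/.Trash
```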
09-04-2018
10:32 AM
@Rabbit, so in our case, given that HDFS usage is 88%, is the only option for getting more HDFS space to add a disk in each datanode? Am I right?
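For context, adding a disk to a datanode means mounting it and appending the new directory to dfs.datanode.data.dir, then restarting the DataNodes; a sketch of how to inspect the current value, with /grid/4 as an assumed mount point for the new disk:

```bash
# Show which directories HDFS currently uses on each datanode
hdfs getconf -confKey dfs.datanode.data.dir

# After mounting the new disk (assumed mount point /grid/4), the property
# would be extended to something like:
#   dfs.datanode.data.dir = /grid/0/hadoop/hdfs/data,...,/grid/4/hadoop/hdfs/data
# and the DataNodes restarted through Ambari.
```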
09-04-2018
10:16 AM
@jay, yes I agree, but how can it be that the datanode disks are at 50% capacity while HDFS shows 88%? And why are we not using the full size of the datanode disks? I really don't understand that.
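One way to pin down that kind of discrepancy (a sketch, assuming the data disks are mounted under /grid) is to compare what the OS reports per disk with what the NameNode counts per datanode:

```bash
# OS view: usage of each mounted data disk on a datanode (assumed mount points)
df -h /grid/*

# NameNode view: Configured Capacity, DFS Used, Non DFS Used, and DFS Remaining
# per datanode; "DFS Used%" is measured against configured capacity only
hdfs dfsadmin -report
```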
09-04-2018
09:36 AM
@Jay , according to http://$ACTIVE_NAMENODE:50070/dfshealth.html#tab-overview we get
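The same overview figures can also be pulled from the NameNode's JMX endpoint; a sketch assuming the default web UI port 50070:

```bash
# Query capacity-related metrics (CapacityTotal, CapacityUsed, CapacityRemaining)
# from the active NameNode's JMX servlet
curl -s "http://$ACTIVE_NAMENODE:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState"
```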