Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2092 | 06-15-2020 05:23 AM |
|  | 17432 | 01-30-2020 08:04 PM |
|  | 2254 | 07-07-2019 09:06 PM |
|  | 8711 | 01-27-2018 10:17 PM |
|  | 4912 | 12-31-2017 10:12 PM |
09-06-2018
10:01 AM
We need debug logging for the Spark Thrift Server. We have an issue where heartbeats from the datanode machines are not reaching the driver, which is why we need debug mode enabled on the Spark Thrift Server.
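For what it's worth, a minimal sketch of raising the Thrift Server's log level in HDP is to edit Spark2's log4j.properties; the path below is the usual HDP default and is an assumption about your layout, and on an Ambari-managed cluster the same change should really be made in the spark2-log4j config in Ambari, or it will be overwritten:

```bash
# Assumption: standard HDP Spark2 config location; verify on your cluster.
# Switch the root logger from INFO to DEBUG, then restart the Thrift Server
# (normally from Ambari) so the new level takes effect.
sudo sed -i 's/^log4j.rootCategory=INFO/log4j.rootCategory=DEBUG/' \
  /etc/spark2/conf/log4j.properties
```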
09-06-2018
07:23 AM
Because the logs are huge, do you want to search for a specific string in the logs?
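If it helps, grepping on the host is usually faster than shipping whole logs; the search string and path below are only illustrative, not taken from this thread:

```bash
# Hypothetical search string -- replace "heartbeat" with the event you care about.
# -r recurses, -i ignores case, -n prints line numbers.
grep -rin "heartbeat" /var/log/spark2/ | tail -50
```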
09-06-2018
07:06 AM
Regarding the logs, please remind me which logs you want me to look at.
09-06-2018
07:04 AM
This is the relevant info from the file (it is a long file, but I think you want to look at the relevant disks):
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data/sdb/hadoop/hdfs/data,/data/sdc/hadoop/hdfs/data,/data/sdd/hadoop/hdfs/data,/data/sde/hadoop/hdfs/data</value>
  <source>hdfs-site.xml</source>
</property>
09-05-2018
10:13 PM
"Executor is a distributed agent that is responsible for executing tasks." This is very clear, but how can we know whether there are any issues with the executors that run on the datanode machines? I am asking because when I look on a datanode machine I do not see any logs representing the executors, and I do not understand how to trace executor problems. The second important question: heartbeats are sent from the executor to the driver. Which logs represent these heartbeats, and how can we know whether there is any issue with heartbeat sending?
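A sketch of where to look, assuming Spark on YARN (the HDP default): executor output goes to YARN container logs rather than files of its own, and lost heartbeats surface in the driver's log via HeartbeatReceiver; the application id and log path below are placeholders:

```bash
# Executor logs are YARN container logs, aggregated per application.
# <application_id> is the Thrift Server's YARN app id, e.g. application_15..._0001.
yarn logs -applicationId <application_id> | grep -i heartbeat

# Driver side: missed heartbeats are logged by HeartbeatReceiver, e.g.
# "Removing executor 3 with no recent heartbeats: ... exceeds timeout ..."
grep -Ei "HeartbeatReceiver|no recent heartbeats" /var/log/spark2/*.out
```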
09-05-2018
09:15 PM
We have an HDP cluster version 2.6.4 with Ambari version 2.6.1 and 8 worker machines (datanode machines). On each worker machine we have the folder /var/log/spark2, but there are no logs under this folder. On the master machines, when the Spark Thrift Server is running, /var/log/spark2 exists and logs are created correctly, but not on the datanode machines. We restarted the Spark Thrift Server twice, but this did not help to create the logs on the datanode machines. Any other ideas what we can do?
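One note that may explain this, assuming the executors run as YARN containers: their logs land under yarn.nodemanager.log-dirs on each worker, not under /var/log/spark2, so an empty /var/log/spark2 on the workers is expected. The default path below is an assumption about a stock HDP install:

```bash
# Find where the NodeManager writes container logs on this worker.
grep -A1 "yarn.nodemanager.log-dirs" /etc/hadoop/conf/yarn-site.xml

# HDP's usual default is /hadoop/yarn/log -- adjust to whatever the grep shows.
ls /hadoop/yarn/log/application_*/container_*/
```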
09-05-2018
09:10 PM
Let me know if you have any conclusions. As you saw, the configuration in HDFS and in the XML is correct, so I have shown you the real status, and the disks are correctly configured in HDFS.
09-05-2018
09:03 PM
This is the file, and it looks fine:
<name>dfs.datanode.data.dir</name>
<value>/data/sdb/hadoop/hdfs/data,/data/sdc/hadoop/hdfs/data,/data/sdd/hadoop/hdfs/data,/data/sde/hadoop/hdfs/data</value>
--
<name>dfs.datanode.data.dir.perm</name>
<value>750</value>
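To see the DataNode's own view of these volumes (capacity per node as HDFS counts it), the standard admin report can be used; it normally has to run as the hdfs superuser:

```bash
# Per-datanode configured capacity, DFS used, and non-DFS used.
sudo -u hdfs hdfs dfsadmin -report
```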
09-05-2018
09:01 PM
Hi, per your request, this is the file:
<name>dfs.datanode.data.dir</name>
<value>/data/sdb/hadoop/hdfs/data,/data/sdc/hadoop/hdfs/data,/data/sdd/hadoop/hdfs/data,/data/sde/hadoop/hdfs/data</value>
--
<name>dfs.datanode.data.dir.perm</name>
<value>750</value>
09-05-2018
06:38 PM
So what is the final conclusion? Why do we have a gap between the disk sizes and the HDFS capacity displayed on the Ambari dashboard?
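A sketch of how such a gap is usually accounted for: part of each volume is reserved for non-DFS use (dfs.datanode.du.reserved), and filesystem overhead plus non-HDFS files further reduce what Ambari shows as DFS capacity. Comparing the raw mounts against HDFS's own accounting usually explains the difference; the mount points below are taken from the earlier posts:

```bash
# Raw size of the data mounts on one worker...
df -h /data/sdb /data/sdc /data/sdd /data/sde

# ...versus HDFS's accounting, including space reserved for non-DFS use.
hdfs getconf -confKey dfs.datanode.du.reserved
sudo -u hdfs hdfs dfsadmin -report | grep -E "Configured Capacity|Present Capacity|Non DFS Used"
```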