
Why are Spark2 logs not created on the datanode machines?


We have an HDP cluster version 2.6.4 (Ambari version 2.6.1) with 8 worker machines (datanode machines). On each worker machine we have the folder /var/log/spark2, but there are no logs under this folder.

On the master machine, where the Spark Thrift server is running, /var/log/spark2 exists and logs are created correctly, but not on the datanode machines. We have restarted the Spark Thrift server twice, but this did not help to create the logs on the datanode machines.

Any other ideas what we can do?

Michael-Bronson

3 REPLIES

@Michael Bronson

Spark will not log anything on the datanode machines (where the executors/containers run) under /var/log/spark2. A Spark application behaves like any other YARN application: while the application is running, its logs are stored in the container's local log directory on the NodeManager, and after the application finishes they are moved to HDFS by log aggregation, from where they can be retrieved with the yarn logs command.
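For example, once the application has finished and log aggregation has run, the driver and executor logs can be pulled back from HDFS with the YARN CLI. This is only a minimal sketch; the application ID below is a placeholder (take the real one from yarn application -list or the ResourceManager UI):

# List finished applications to find the Spark application ID (placeholder ID used below)
yarn application -list -appStates FINISHED

# Fetch the aggregated driver and executor logs for that application
yarn logs -applicationId application_1518000000000_0001 > app_logs.txt

While the application is still running, those container logs live locally on each NodeManager under the directory configured by yarn.nodemanager.log-dirs, not under /var/log/spark2.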

Hope this helps.


@Sandeep, let me explain why we want logs under /var/log/spark2 on the datanode machines; maybe you have a suggestion.

We noticed that the messages sent by the executors (on the datanodes) cannot be delivered to the driver (on the master machine), and in the YARN logs we can see this warning:

WARN executor.Executor: Issue communicating with driver in heartbeater

My first question is: what could be the reasons that the driver (on the master1 machine) does not receive the heartbeats from the worker machines?

Second, how can we debug this "communicating with driver in heartbeater" problem?

Michael-Bronson


@Michael Bronson

By default Spark2 has the log level set to WARN. Set it to INFO to get more context on what is going on in the driver and the executors. Moreover, while a container is still running, its log is available locally on the NodeManager. The easiest way to reach it is through the Spark UI (the YARN ApplicationMaster UI): click on the Executors tab, and you should see stderr and stdout links for the driver and each executor. Regarding the WARN about the heartbeat, we would need to check what the driver is doing at that point. I think you have already asked another question with more details on the driver and executor.
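As one concrete way to apply that change for a single application (rather than cluster-wide through the Spark2 log4j configuration in Ambari), you can ship a custom log4j.properties with the job. This is only a sketch, assuming YARN cluster mode; the file name, class, and jar below are hypothetical placeholders:

# Hypothetical log4j file that raises the root logger from WARN to INFO
cat > spark-info-log4j.properties <<'EOF'
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
EOF

# Ship the file to the driver and executor containers and point log4j at it
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --files spark-info-log4j.properties \
  --conf spark.driver.extraJavaOptions=-Dlog4j.configuration=spark-info-log4j.properties \
  --conf spark.executor.extraJavaOptions=-Dlog4j.configuration=spark-info-log4j.properties \
  --class com.example.MyApp myapp.jar

With the level at INFO, the stderr links on the Executors tab will show much more detail from both the driver and the executors.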