Support Questions


location of hadoop and pig logs

Rising Star

My Pig script fails when I try to do an ORDER BY. I want to check the logs, but I can't find where they are written. From another post I learned that the log location should be set in /etc/hadoop/hadoop-env.sh, but in my file everything is commented out, as shown below. Does that mean the logs are not being written? How can I set those values so I can look at the logs? My script keeps failing, and I need the logs to troubleshoot it. Also, my hadoop-env.sh seems to be at a different path (/etc/hadoop/conf.install). Is there anything wrong with that?

# On secure datanodes, user to run the datanode as after dropping privileges.
# This **MUST** be uncommented to enable secure HDFS if using privileged ports
# to provide authentication of data transfer protocol. This **MUST NOT** be
# defined if SASL is configured for authentication of data transfer protocol
# using non-privileged ports.
#export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}

# Where log files are stored. $HADOOP_HOME/logs by default.
#export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER
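For what it's worth, those commented lines are just the compiled-in defaults; overriding the log directory in hadoop-env.sh would look something like the sketch below (the /var/log/hadoop path is only an example, not a required value):

```shell
# In hadoop-env.sh: uncomment and set explicitly to redirect daemon logs.
# /var/log/hadoop is an assumed example location, not a requirement.
export HADOOP_LOG_DIR=/var/log/hadoop/$USER
```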

1 ACCEPTED SOLUTION


Hello Amit, you mentioned that you "need to analyse map reduce logs in order to resolve the issue."

Did you check the following directories for the MapReduce logs?

/var/log/hadoop-mapreduce/mapred/ (for the JobHistory logs)

/var/log/hadoop-yarn/yarn/ (for hadoop-mapreduce.jobsummary.log)
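If those directories exist on your node, a quick way to spot the most recently written log files is something like this sketch (paths are the defaults mentioned above; the loop is a no-op on hosts where they don't exist):

```shell
# List the five newest files in each of the default MapReduce/YARN log dirs.
# Skips any directory that is not present on this host.
for d in /var/log/hadoop-mapreduce/mapred /var/log/hadoop-yarn/yarn; do
  if [ -d "$d" ]; then
    echo "== $d =="
    ls -lt "$d" | head -n 5
  fi
done
```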


10 REPLIES

Rising Star

@Amit Sharma

You can review the failed job's logs from the ResourceManager UI:

Ambari > YARN > Quick Links > ResourceManager UI
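Once you have the application ID of the failed Pig job from the ResourceManager UI, the `yarn logs` CLI can also pull the aggregated container logs from the command line. A sketch (the application ID below is a made-up placeholder; substitute the real one):

```shell
# Hypothetical application ID -- replace with the one shown for your
# failed job in the ResourceManager UI.
APP_ID=application_1400000000000_0001

# Guarded so this is a no-op on machines without the yarn CLI installed.
if command -v yarn >/dev/null 2>&1; then
  yarn logs -applicationId "$APP_ID" | head -n 100
fi
```

Note that `yarn logs` only works after the application has finished and log aggregation is enabled on the cluster.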