
Oozie giving core dump

New Contributor

Please help me understand this core dump that occurs while Pig is running via Oozie.

The job runs fine when submitted from the command line, but when launched through Oozie it crashes with the core dump below (both submission commands are sketched after the log):

 

2016-11-22 20:35:27,649 [main] INFO  org.apache.hadoop.yarn.client.RMProxy  - Connecting to ResourceManager at rms-name-node/10.194.131.39:8032
2016-11-22 20:35:27,714 [main] INFO  org.apache.pig.tools.pigstats.ScriptState  - Pig script settings are added to the job
2016-11-22 20:35:29,650 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation  - mapred.job.reduce.markreset.buffer.percent is deprecated. Instead, use mapreduce.reduce.markreset.buffer.percent
2016-11-22 20:35:29,650 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler  - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2016-11-22 20:35:29,652 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation  - mapred.output.compress is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
2016-11-22 20:35:29,806 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation  - fs.default.name is deprecated. Instead, use fs.defaultFS
2016-11-22 20:35:29,807 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation  - mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
2016-11-22 20:35:33,960 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler  - creating jar file Job4907606867917395935.jar
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGBUS (0x7) at pc=0x00007f71dc0357fb, pid=6058, tid=140127011108608
#
# JRE version: Java(TM) SE Runtime Environment (7.0_79-b15) (build 1.7.0_79-b15)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.79-b02 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C  [libc.so.6+0x897fb]  memcpy+0x15b
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /opt/dfs/yarn/nm/usercache/hdfs/appcache/application_1474461332137_1630/container_1474461332137_1630_01_000002/hs_err_pid6058.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
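For reference, the two submission paths look roughly like this; the script, host, and properties file names are illustrative, not from the original post:

# direct submission from the command line (works)
pig -f myscript.pig

# submission through Oozie (crashes as shown above)
oozie job -oozie http://oozie-host:11000/oozie -config job.properties -run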

 

1 REPLY

Re: Oozie giving core dump

Expert Contributor

Is there anything in the file that the log mentions?

/opt/dfs/yarn/nm/usercache/hdfs/appcache/application_1474461332137_1630/container_1474461332137_1630_01_000002/hs_err_pid6058.log
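If the container's appcache directory has not been cleaned up yet, you could inspect the report on the NodeManager host that ran the container; a sketch, assuming shell access to that node and using the path from the log above:

HS_ERR=/opt/dfs/yarn/nm/usercache/hdfs/appcache/application_1474461332137_1630/container_1474461332137_1630_01_000002/hs_err_pid6058.log
# the crash header and failing frame are near the top of the report
head -n 80 "$HS_ERR"
# the "Native frames" section shows the native call chain that raised the SIGBUS
grep -n -A 20 "Native frames" "$HS_ERR"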

 
The message itself suggests the process was killed by a SIGBUS raised in native code (memcpy inside libc), not by the JVM itself. Is there anything in your /var/log/messages or in dmesg that might provide further insight?
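For example, something along these lines on the NodeManager host around the time of the crash (a sketch; exact log locations vary by distribution):

# kernel ring buffer: look for bus errors, memory problems, or OOM kills
dmesg | grep -i -E 'sigbus|bus error|out of memory|oom'
# system log (may be /var/log/syslog on some distributions)
grep -i -E 'sigbus|memory|oom' /var/log/messages | tail -n 50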