Member since
11-07-2017
3
Posts
1
Kudos Received
0
Solutions
01-03-2018
12:26 PM
1 Kudo
Using HDP 2.5. I am running into memory problems when adding an extra Flume agent to a node: as soon as an extra agent is started (10 currently run fine), another one stops. Does anyone have an idea which setting to change?

Flume agent log:

# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
# An error report file with more information is saved as:
# /home/flume/hs_err_pid67793.log

hs_err log:

# Out of Memory Error (gcTaskThread.cpp:48), pid=67793, tid=0x00007fe6344d1700
#
# JRE version: (8.0_112-b15) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.112-b15 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
--------------- T H R E A D ---------------
Current thread (0x00007fe62c030000): JavaThread "Unknown thread" [_thread_in_vm, id=69273, stack(0x00007fe6343d1000,0x00007fe6344d2000)]
Stack: [0x00007fe6343d1000,0x00007fe6344d2000], sp=0x00007fe6344d0550, free space=1021k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [libjvm.so+0xac6f7a] VMError::report_and_die()+0x2ba
V [libjvm.so+0x4fc71b] report_vm_out_of_memory(char const*, int, unsigned long, VMErrorType, char const*)+0x8b
V [libjvm.so+0x5d78ff] GCTaskThread::GCTaskThread(GCTaskManager*, unsigned int, unsigned int)+0x15f
V [libjvm.so+0x5d66bb] GCTaskManager::initialize()+0x3ab
V [libjvm.so+0x9472fd] ParallelScavengeHeap::initialize()+0x34d
V [libjvm.so+0xa8f0a3] Universe::initialize_heap()+0xf3
V [libjvm.so+0xa8f60e] universe_init()+0x3e
V [libjvm.so+0x63d385] init_globals()+0x65
V [libjvm.so+0xa72cfe] Threads::create_vm(JavaVMInitArgs*, bool*)+0x23e
V [libjvm.so+0x6d1c24] JNI_CreateJavaVM+0x74
C [libjli.so+0x745e] JavaMain+0x9e
C [libpthread.so.0+0x7aa1] start_thread+0xd1
VM Arguments:
jvm_args: -Xmx20m -Dflume.monitoring.type=org.apache.hadoop.metrics2.sink.flume.FlumeTimelineMetricsSink -Dflume.monitoring.node=xxxxxx.is.xxxxxx.net:6188 -Djava.library.path=:#
java_command: # There is insufficient memory for the Java Runtime Environment to continue. # Cannot create GC thread. Out of system resources. # An error report file with more information is saved as: # /home/flume/hs_err_pid67800.log:# # There is insufficient memory for the Java Runtime Environment to continue. # Cannot create worker GC thread. Out of system resources. # An error report file with more information is saved as: # /home/flume/hs_err_pid68346.log org.apache.flume.node.Application --name slspax-agent --conf-file /usr/hdp/current/flume-server/conf/slspax-agent/flume.conf
java_class_path (initial): /usr/hdp/current/flume-server/conf/slspax-agent:/usr/hdp/2.5.0.0-1245/flume/lib/parquet-common-1.2.5.jar:/usr/hdp/2.5.0.0-1245/flume/lib/jopt-simple-4.9.jar:/usr/hdp/2.5.0.0-1245/flume/lib/log4j-1.2.17.jar:/usr/hdp/2.5.0.0-1245/flume/lib/parquet-generator-1.2.5.jar:/usr/hdp/2.5.0.0-1245/flume/lib/flume-hive-sink-1.5.2.2.5.0.0-1245.jar:/usr/hdp/2.5.0.0-1245/flume/lib/flume-ng-kafka-sink-1.5.2.2.5.0.0-1245.jar:/usr/hdp/2.5.0.0-1245/flume/lib/irclib-1.10.jar:/usr/hdp/2.5.0.0-1245/flume/lib/kafka_2.10-0.10.0.2.5.0.0-1245.jar:/usr/hdp/2.5.0.0-1245/flume/lib/flume-ng-hbase-sink-1.5.2.2.5.0.0-1245.jar:/usr/hdp/2.5.0.0-1245/flume/lib/commons-dbcp-1.4.jar:/usr/hdp/2.5.0.0-1245/flume/lib/apache-log4j-extras-1.1.jar:/usr/hdp/2.5.0.0-1245/flume/lib/avro-1.7.3.jar:/usr/hdp/2.5.0.0-1245/flume/lib/xercesImpl-2.9.1.jar:/usr/hdp/2.5.0.0-1245/flume/lib/flume-dataset-sink-1.5.2.2.5.0.0-1245.jar:/usr/hdp/2.5.0.0-1245/flume/lib/mapdb-0.9.9.jar:/usr/hdp/2.5.0.0-1245/flume/lib/flume-kafka-channel-1.5.2.2.5.0.0-1245.jar:/usr/hdp/2.5.0.0-1245/flume/lib/flume-irc-sink-1.5.2.2.5.0.0-1245.jar:/usr/hdp/2.5.0.0-1245/flume/lib/commons-io-2.4.jar:/usr/hdp/2.5.0.0-1245/flume/lib/commons-cli-1.2.jar:/usr/hdp/2.5.0.0-1245/flume/lib/flume-jdbc-channel-1.5.2.2.5.0.0-1245.jar:/usr/hdp/2.5.0.0-1245/flume/lib/flume-tools-1.5.2.2.5.0.0-1245.jar:/usr/hdp/2.5.0.0-1245/flume/lib/flume-avro-source-1.5.2.2.5.0.0-1245.jar:/usr/hdp/2.5.0.0-1245/flume/lib/async-1.4.0.jar:/usr/hdp/2.5.0.0-1245/flume/lib/paranamer-2.3.jar:/usr/hdp/2.5.0.0-1245/flume/lib/zkclient-0.8.jar:/usr/hdp/2.5.0.0-1245/flume/lib/flume-ng-embedded-agent-1.5.2.2.5.0.0-1245.jar:/usr/hdp/2.5.0.0-1245/flume/lib/flume-scribe-source-1.5.2.2.5.0.0-1245.jar:/usr/hdp/2.5.0.0-1245/flume/lib/mina-core-2.0.4.jar:/usr/hdp/2.5.0.0-1245/flume/lib/xalan-2.7.1.jar:/usr/hdp/2.5.0.0-1245/flume/lib/commons-codec-1.8.jar:/usr/hdp/2.5.0.0-1245/flume/lib/servlet-api-2.5-20110124.jar:/usr/hdp/2.5.0.0-1245/flume/lib/hadoop-auth.jar:/usr/hdp
Launcher Type: SUN_STANDARD
Environment Variables:
JAVA_HOME=/tech/java/oracle/1.8
CLASSPATH=/etc/hadoop/conf
PATH=/usr/sbin:/sbin:/usr/lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent
LD_LIBRARY_PATH=::#
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
# An error report file with more information is saved as:
# /home/flume/hs_err_pid67800.log:#
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create worker GC thread. Out of system resources.
# An error report file with more information is saved as:
# /home/flume/hs_err_pid68346.log
SHELL=/bin/bash
Signal Handlers:
SIGSEGV: [libjvm.so+0xac7800], sa_mask[0]=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGBUS: [libjvm.so+0xac7800], sa_mask[0]=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGFPE: [libjvm.so+0x920990], sa_mask[0]=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGPIPE: [libjvm.so+0x920990], sa_mask[0]=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGXFSZ: [libjvm.so+0x920990], sa_mask[0]=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGILL: [libjvm.so+0x920990], sa_mask[0]=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGUSR1: SIG_DFL, sa_mask[0]=00000000000000000000000000000000, sa_flags=none
SIGUSR2: [libjvm.so+0x9221d0], sa_mask[0]=00000000000000000000000000000000, sa_flags=SA_RESTART|SA_SIGINFO
SIGHUP: SIG_IGN, sa_mask[0]=00000000000000000000000000000000, sa_flags=none
SIGINT: SIG_IGN, sa_mask[0]=00000000000000000000000000000000, sa_flags=none
SIGTERM: SIG_DFL, sa_mask[0]=00000000000000000000000000000000, sa_flags=none
SIGQUIT: SIG_IGN, sa_mask[0]=00000000000000000000000000000000, sa_flags=none
--------------- S Y S T E M ---------------
OS:Red Hat Enterprise Linux Server release 6.6 (Santiago)
uname:Linux 2.6.32-504.23.4.el6.x86_64 #1 SMP Fri May 29 10:16:43 EDT 2015 x86_64
libc:glibc 2.12 NPTL 2.12
rlimit: STACK 10240k, CORE 0k, NPROC 1024, NOFILE 8192, AS infinity
load average:0.08 0.25 3.42
--------------- P R O C E S S ---------------
Java Threads: ( => current thread )
Other Threads:
=>0x00007fe62c030000 (exited) JavaThread "Unknown thread" [_thread_in_vm, id=69273, stack(0x00007fe6343d1000,0x00007fe6344d2000)]
VM state:not at safepoint (not fully initialized)
VM Mutex/Monitor currently owned by a thread: None
GC Heap History (0 events):
No events
Deoptimization events (0 events):
No events
Internal exceptions (0 events):
No events
Events (0 events):
No events
Dynamic libraries:
00400000-00401000 r-xp 00000000 fd:2b 26 /tech/java/oracle/jdk-1.8.0_112/bin/java
00600000-00601000 rw-p 00000000 fd:2b 26 /tech/java/orac
..
Garbage Collector (GC) log: Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 264369912k(85463588k free), swap 8290300k(8290300k free)
CommandLine flags: -XX:InitialHeapSize=104857600 -XX:MaxHeapSize=4194304000 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseParallelGC
Heap
PSYoungGen total 29696K, used 16697K [0x000000076cb00000, 0x000000076ec00000, 0x00000007c0000000)
eden space 25600K, 65% used [0x000000076cb00000,0x000000076db4e408,0x000000076e400000)
from space 4096K, 0% used [0x000000076e800000,0x000000076e800000,0x000000076ec00000)
to space 4096K, 0% used [0x000000076e400000,0x000000076e400000,0x000000076e800000)
ParOldGen total 68608K, used 0K [0x00000006c6000000, 0x00000006ca300000, 0x000000076cb00000)
object space 68608K, 0% used [0x00000006c6000000,0x00000006c6000000,0x00000006ca300000)
Metaspace used 3005K, capacity 4480K, committed 4480K, reserved 1056768K
class space used 311K, capacity 384K, committed 384K, reserved 1048576K
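A note on the log above: the crash is "Cannot create GC thread. Out of system resources." while the machine still has plenty of free physical memory, and the rlimit line shows NPROC 1024. That suggests the per-user process/thread limit, not heap, is being exhausted as more agents start. A diagnostic sketch (the `flume` user is taken from the log paths above; adjust as needed):

```shell
# User running the Flume agents (from the hs_err log paths; override if different)
FLUME_USER=${FLUME_USER:-flume}

# Per-user limit on processes/threads (NPROC) for the current shell;
# the hs_err log above reports 1024
ulimit -u

# Threads currently counted against that user's NPROC limit
# (each JVM thread is a light-weight process on Linux)
ps -L -u "$FLUME_USER" --no-headers 2>/dev/null | wc -l
```

If the thread count is near the limit, raising `nproc` for the flume user (e.g. via an entry in /etc/security/limits.d/) and restarting the agents would be the usual fix; this is a hypothesis based on the rlimit values in the log, not a confirmed diagnosis.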
11-08-2017
01:54 PM
@Chris Cotter Thank you for your answer. I can confirm that rcongiu's version of the JSON SerDe circumvents the issue.
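For anyone who lands here later, a minimal sketch of what that alternative looks like (assuming the third-party Hive-JSON-Serde jar by rcongiu is available on the cluster; the jar path below is illustrative, not the actual one used):

```sql
-- Illustrative path; use the jar actually deployed on your cluster
ADD JAR /path/to/json-serde-jar-with-dependencies.jar;

CREATE EXTERNAL TABLE foobar_table (
  foobar STRING
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES ('mapping.foobar' = '#foobar')
LOCATION '/data/foobar';
```

The `mapping.*` serde property in that project maps a Hive column name to a JSON attribute, which is what allows the `#`-prefixed key to be read.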
11-07-2017
04:02 PM
Hi all, this is my first post here. I have an issue trying to map a JSON field starting with a hash (#) to a Hive column, using the (simplified) statement below:

CREATE EXTERNAL TABLE foobar_table (
foobar STRING
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
WITH SERDEPROPERTIES ( 'mapping.foobar' = '#foobar' )
LOCATION '/data/foobar';
The JSON file looks like this:

{"#foobar":"some_value"}
There are no issues when the special character is removed, but that is not an option in the real-world scenario. Is the failing mapping a bug in the code, or am I using the wrong (version of the) JsonSerDe? I'm on HDP-2.5.0.0. Hopefully someone can supply an answer other than renaming the JSON field. Thanks in advance.