Member since: 09-24-2015
Posts: 144
Kudos Received: 72
Solutions: 8
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1315 | 08-15-2017 08:15 AM |
| | 6155 | 01-24-2017 06:58 AM |
| | 1614 | 08-03-2016 06:45 AM |
| | 2914 | 06-01-2016 10:08 PM |
| | 2502 | 04-07-2016 10:30 AM |
11-24-2015 11:18 AM
12 CPUs and 64 GB of memory; current settings are mapreduce.reduce.shuffle.input.buffer.percent=0.7 and mapreduce.reduce.memory.mb=2048.
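For reference, a per-job equivalent of those settings can be passed at submission time, assuming the job's driver goes through ToolRunner/GenericOptionsParser so -D overrides are honored (my-app.jar, MyDriver, and the paths are placeholders):

```
# Hypothetical job submission; jar, driver class, and paths are placeholders.
hadoop jar my-app.jar MyDriver \
  -Dmapreduce.reduce.memory.mb=2048 \
  -Dmapreduce.reduce.shuffle.input.buffer.percent=0.7 \
  /input /output
```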
11-24-2015 03:33 AM
Seeing the following error on HDP 2.3.0:
```
2015-10-19 07:33:03,353 ERROR mapred.ShuffleHandler (ShuffleHandler.java:exceptionCaught(1053)) - Shuffle error:
java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Arrays.copyOf(Arrays.java:2219)
at java.util.ArrayList.grow(ArrayList.java:242)
at java.util.ArrayList.ensureExplicitCapacity(ArrayList.java:216)
at java.util.ArrayList.ensureCapacityInternal(ArrayList.java:208)
at java.util.ArrayList.add(ArrayList.java:440)
--
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)

2015-10-21 07:05:13,532 FATAL yarn.YarnUncaughtExceptionHandler (YarnUncaughtExceptionHandler.java:uncaughtException(51)) - Thread Thread[Container Monitor,5,main] threw an Error. Shutting down now...
java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.io.BufferedReader.<init>(BufferedReader.java:98)
at java.io.BufferedReader.<init>(BufferedReader.java:109)
at org.apache.hadoop.yarn.util.ProcfsBasedProcessTree.constructProcessInfo(ProcfsBasedProcessTree.java:545)
at org.apache.hadoop.yarn.util.ProcfsBasedProcessTree.updateProcessTree(ProcfsBasedProcessTree.java:225)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl$MonitoringThread.run(ContainersMonitorImpl.java:439)
```
Besides increasing "mapreduce.reduce.memory.mb", can I also add "-XX:-UseGCOverheadLimit" to "mapreduce.admin.reduce.child.java.opts"? And would it be a good idea to reduce "mapreduce.reduce.shuffle.input.buffer.percent"?
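For what it's worth, a sketch of the direction I have in mind, not verified on HDP 2.3.0. One caveat: "mapreduce.admin.reduce.child.java.opts" is the admin-managed prefix that gets prepended to the reducer JVM options, so per-job flags normally go in "mapreduce.reduce.java.opts" instead. The jar, driver class, and paths below are placeholders.

```
# Unverified sketch: bigger reducer container, GC overhead limit disabled,
# and a smaller shuffle buffer so less of the heap holds map output.
hadoop jar my-app.jar MyDriver \
  -Dmapreduce.reduce.memory.mb=3072 \
  -D"mapreduce.reduce.java.opts=-Xmx2457m -XX:-UseGCOverheadLimit" \
  -Dmapreduce.reduce.shuffle.input.buffer.percent=0.5 \
  /input /output
```

-Xmx stays at roughly 80% of the container size, and lowering the buffer percent trades reducer heap pressure for more spills to disk.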
Labels:
- Apache Hadoop
- Apache YARN
11-12-2015 08:55 AM (1 Kudo)
The bug you pointed me to looks like it fixes the "[XXX]" part, not the "file://" part, I guess. So I assume that bug has already been fixed.
11-12-2015 08:08 AM
After restarting all HDFS components a few times, (2) was resolved (I can write now). But I'm not sure I'm doing it the right way, because all the docs say "file:///" and I couldn't use that.
11-12-2015 08:02 AM
HDP 2.3.2 and Ambari 2.1.2. I have been trying to follow http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_hdfs_admin_tools/content/configuring_archival_storage.html and https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html but am running into a few issues:
1) Ambari doesn't allow me to type "[SSD]file:///...."
2) Using "[DISK]/hadoop/hdfs/data,[SSD]/hadoop/hdfs/ssd", I can read the data but can't write.
3) The command examples in both URLs do not work (I'm guessing the command shouldn't be "dfsadmin").
I'm getting tired of chasing the docs, so I was wondering if someone could share their notes on setting up Archival Storage.
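In case it helps, here is the minimal test I would run once the mounts are in place, assuming the SSD is mounted at /hadoop/hdfs/ssd and dfs.datanode.data.dir is "[DISK]/hadoop/hdfs/data,[SSD]/hadoop/hdfs/ssd". On Hadoop 2.6 and later the policy commands live under "hdfs storagepolicies" rather than "hdfs dfsadmin", which may be why the examples in older docs fail; /tmp/ssd-test is a throwaway path.

```
su - hdfs
# List the built-in policies (HOT, WARM, COLD, ALL_SSD, ONE_SSD, LAZY_PERSIST).
hdfs storagepolicies -listPolicies
hdfs dfs -mkdir -p /tmp/ssd-test
hdfs storagepolicies -setStoragePolicy -path /tmp/ssd-test -policy ALL_SSD
hdfs storagepolicies -getStoragePolicy -path /tmp/ssd-test
# New writes under the path should now land on the [SSD] volumes.
hdfs dfs -put /etc/hosts /tmp/ssd-test/
```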
Labels:
- Apache Hadoop
11-11-2015 09:43 AM (1 Kudo)
That's something I'm not sure about. If I kinit as hive/FQDN@MY_REALM, I can use the "hdfs dfs -ls" command; then I start beeline and get this error.
11-11-2015 09:27 AM
I basically followed the instructions in http://hortonworks.com/blog/enabling-kerberos-hdp-active-directory-integration/. From Ambari, everything looks OK, but the beeline command fails with "GSS initiate failed (state=08S01,code=0)":
```
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
... 28 more
```
I checked that time is synchronized and that IP/hostname resolution is correct. I built another HDP 2.3.2 / Ambari 2.1.2 cluster, this time without AD (simple MIT KDC). Again Ambari looks OK, but beeline fails with the same error. This is how I start beeline:
```
su - hive
beeline -u "jdbc:hive2://hiveserver2_fqdn:10000/default;principal=hive/hiveserver2_fqdn@MY_REALM"
```
I think I'm forgetting some setting... I'd appreciate any advice. Thank you.
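For anyone hitting the same thing: the usual cause of "Failed to find any Kerberos tgt" is that the user running beeline has no TGT in its credential cache, so a kinit first is worth checking. A minimal sketch, assuming the standard HDP keytab location (the keytab path is an assumption; adjust for your cluster):

```
su - hive
# Obtain a TGT for the hive service principal; keytab path is assumed.
kinit -kt /etc/security/keytabs/hive.service.keytab hive/$(hostname -f)@MY_REALM
klist    # should show a valid ticket for hive/<fqdn>@MY_REALM
beeline -u "jdbc:hive2://hiveserver2_fqdn:10000/default;principal=hive/hiveserver2_fqdn@MY_REALM"
```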
Labels:
- Apache Hive
11-04-2015 01:06 AM
I'm using 2.3.2 (but the customer is on 2.2.4.2).
11-04-2015 01:00 AM
But what about "jobTracker" in job.properties? I need to type something like "jobTracker=hostname:8032".
11-04-2015 12:53 AM (3 Kudos)
I was always wondering what value I should use for "jobTracker" in my job.properties when ResourceManager HA is enabled. Now a customer has asked the same question, so I thought this might be a good opportunity to find out. Does anyone know which string we should use to take advantage of YARN ResourceManager HA? According to Google, Cloudera uses "logicaljt", but I don't see that string anywhere in the HDP code so far.
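Lacking an authoritative answer, this is the sketch I would try, based on my assumption (not confirmed in the HDP code) that the YARN client fails over between ResourceManagers on its own as long as the RM HA properties from yarn-site.xml are on the Oozie server's classpath; under that assumption, jobTracker can point at either RM's address. rm1.example.com and hdfs://mycluster are placeholders.

```
# Unverified sketch of a job.properties for a cluster with RM HA enabled.
cat > job.properties <<'EOF'
nameNode=hdfs://mycluster
# Either RM address should work if the client-side HA config is present:
jobTracker=rm1.example.com:8032
oozie.use.system.libpath=true
EOF
```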
Labels:
- Apache Hadoop
- Apache Oozie
- Apache YARN