Member since: 04-01-2016
Posts: 13
Kudos Received: 0
Solutions: 1

My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 1709 | 12-27-2016 12:18 AM |
12-27-2016 12:18 AM
Solution found: the config changes were marked as "Stale Configuration". See https://www.cloudera.com/documentation/enterprise/5-6-x/topics/cm_mc_client_config.html
12-25-2016 02:50 PM
I've moved from the QuickStart VM 5.3 to 5.8 and found that the YARN config in /etc/hadoop is ignored by the system. I can change some settings via Cloudera Manager, but it looks like in version 5.8 the settings are no longer stored in /etc/hadoop. I'm getting a Java heap space error in my YARN jobs, very close to the issue solved before on 5.3 (http://community.cloudera.com/t5/Hadoop-101-Training-Quickstart/Map-and-Reduce-Error-Java-heap-space/m-p/45874), but now I don't see any effect on the system after changing the /etc/hadoop configs. export HADOOP_OPTS="-Xmx5096m" also isn't working for me.
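For reference, a minimal sketch of the per-job alternative I'd expect to work while the CM-managed configs are out of sync: setting the MR2 memory properties on the job Configuration in the driver instead of editing /etc/hadoop by hand. The property names are the standard MR2 ones; the class name and heap values are illustrative assumptions, not taken from this job.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class HeapSettingsSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Container sizes requested from YARN, in MB (illustrative values)
        conf.set("mapreduce.map.memory.mb", "2048");
        conf.set("mapreduce.reduce.memory.mb", "2048");
        // JVM heap inside each container; keep it below the container size
        conf.set("mapreduce.map.java.opts", "-Xmx1536m");
        conf.set("mapreduce.reduce.java.opts", "-Xmx1536m");
        Job job = Job.getInstance(conf, "heap-settings-sketch");
        // ... set mapper/reducer, input/output formats and paths as usual,
        // then submit with job.waitForCompletion(true)
    }
}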
Labels:
- Apache YARN
- Quickstart VM
10-06-2016 11:05 PM
Thanks! Setting mapred.child.java.opts in mapred-site.xml solved the issue.
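For anyone hitting the same thing, a minimal sketch of the per-job equivalent, set on the job Configuration in the driver rather than cluster-wide in mapred-site.xml; the heap value and class name are illustrative assumptions.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ChildOptsSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Old-style MR1 key; MR2 still falls back to it when the newer
        // mapreduce.map.java.opts / mapreduce.reduce.java.opts keys are unset
        conf.set("mapred.child.java.opts", "-Xmx1024m");
        Job job = Job.getInstance(conf, "child-opts-sketch");
        // ... configure mapper/reducer and paths as usual ...
    }
}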
10-06-2016 04:59 AM
hadoop-cmf-yarn-NODEMANAGER-quickstart.cloudera.log.out: 2016-10-03 12:22:14,533 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 18309 for container-id container_1475517800829_0009_01_000005: 130.2 MB of 3 GB physical memory used; 859.9 MB of 6.3 GB virtual memory used
2016-10-03 12:22:28,045 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 16676 for container-id container_1475517800829_0009_01_000001: 178.8 MB of 1 GB physical memory used; 931.1 MB of 2.1 GB virtual memory used
2016-10-03 12:22:31,303 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 18309 for container-id container_1475517800829_0009_01_000005: 128.8 MB of 3 GB physical memory used; 859.9 MB of 6.3 GB virtual memory used
2016-10-03 12:22:46,965 WARN org.apache.hadoop.yarn.util.ProcfsBasedProcessTree: Error reading the stream java.io.IOException: No such process
2016-10-03 12:22:46,966 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 16676 for container-id container_1475517800829_0009_01_000001: 179.0 MB of 1 GB physical memory used; 931.1 MB of 2.1 GB virtual memory used
2016-10-03 12:22:47,122 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1475517800829_0009_01_000005
10-03-2016 12:24 PM
I'm using the QuickStart VM with CDH 5.3 and trying to run a modified sample from MR-parquet read. It worked OK on a 10M-row Parquet table, but I get a "Java heap space" error on a table with 40M rows:
[cloudera@quickstart sep]$ yarn jar testmr-1.0-SNAPSHOT.jar TestReadParquet /user/hive/warehouse/parquet_table out_file18 -Dmapreduce.reduce.memory.mb=5120 -Dmapreduce.reduce.java.opts=-Xmx4608m -Dmapreduce.map.memory.mb=5120 -Dmapreduce.map.java.opts=-Xmx4608m
16/10/03 12:19:30 INFO client.RMProxy: Connecting to ResourceManager at quickstart.cloudera/127.0.0.1:8032
16/10/03 12:19:31 INFO input.FileInputFormat: Total input paths to process : 1
Oct 03, 2016 12:19:31 PM parquet.Log info
INFO: Total input paths to process : 1
Oct 03, 2016 12:19:31 PM parquet.Log info
INFO: Initiating action with parallelism: 5
Oct 03, 2016 12:19:31 PM parquet.Log info
INFO: reading another 1 footers
Oct 03, 2016 12:19:31 PM parquet.Log info
INFO: Initiating action with parallelism: 5
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
16/10/03 12:19:31 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
16/10/03 12:19:31 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
Oct 03, 2016 12:19:31 PM parquet.Log info
INFO: There were no row groups that could be dropped due to filter predicates
16/10/03 12:19:32 INFO mapreduce.JobSubmitter: number of splits:1
16/10/03 12:19:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1475517800829_0009
16/10/03 12:19:33 INFO impl.YarnClientImpl: Submitted application application_1475517800829_0009
16/10/03 12:19:33 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1475517800829_0009/
16/10/03 12:19:33 INFO mapreduce.Job: Running job: job_1475517800829_0009
16/10/03 12:19:47 INFO mapreduce.Job: Job job_1475517800829_0009 running in uber mode : false
16/10/03 12:19:47 INFO mapreduce.Job: map 0% reduce 0%
16/10/03 12:20:57 INFO mapreduce.Job: map 100% reduce 0%
16/10/03 12:20:57 INFO mapreduce.Job: Task Id : attempt_1475517800829_0009_m_000000_0, Status : FAILED
Error: Java heap space
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
I've also tried editing /etc/hadoop/conf/mapred-site.xml, and tried via the Cloudera Manager GUI (Clusters -> HDFS -> ... Java Heap Size of DataNode in Bytes).
[cloudera@quickstart sep]$ free -m
             total       used       free     shared    buffers     cached
Mem:         13598      13150        447          0         23        206
-/+ buffers/cache:      12920        677
Swap:         6015       2187       3828
Mapper class:
public static class MyMap extends Mapper<LongWritable, Group, NullWritable, Text> {

    @Override
    public void map(LongWritable key, Group value, Context context)
            throws IOException, InterruptedException {
        NullWritable outKey = NullWritable.get();
        String outputRecord = "";
        // Get the schema and field values of the record
        // String inputRecord = value.toString();
        // Process the value, create an output record
        // ...
        int field1 = value.getInteger("x", 0);
        if (field1 < 3) {
            context.write(outKey, new Text(outputRecord));
        }
    }
}
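For context, a rough driver sketch (my assumption, not the actual TestReadParquet source): the -Dmapreduce.*.memory.mb / -Dmapreduce.*.java.opts flags on the yarn jar command line only end up in the job configuration if the main class goes through ToolRunner/GenericOptionsParser, roughly like this:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class TestReadParquetDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already carries any -D overrides parsed by ToolRunner
        Configuration conf = getConf();
        Job job = Job.getInstance(conf, "TestReadParquet");
        job.setJarByClass(TestReadParquetDriver.class);
        // ... set MyMap, the Parquet input format, and the input/output paths here ...
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new TestReadParquetDriver(), args));
    }
}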
Labels:
- Cloudera Manager
- MapReduce
- Quickstart VM