Member since: 01-16-2014
Posts: 336
Kudos Received: 43
Solutions: 31
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3402 | 12-20-2017 08:26 PM |
| | 3378 | 03-09-2017 03:47 PM |
| | 2843 | 11-18-2016 09:00 AM |
| | 5027 | 05-18-2016 08:29 PM |
| | 3858 | 02-29-2016 01:14 AM |
02-06-2020
04:08 AM
Hi, you also need to check the configuration below (if any of it applies):
1. Dynamic Resource Pool Configuration > Resource Pools - check whether jobs are exceeding any of the maximum values for the queue they are being submitted to.
2. Dynamic Resource Pool Configuration > User Limits - check whether the number of applications a user submits simultaneously exceeds the default value (5) or the value you have specified.
The equivalent Fair Scheduler settings are sketched below.
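For reference, CM's Dynamic Resource Pool Configuration ultimately generates a Fair Scheduler allocation file; a minimal sketch of where those per-queue and per-user caps live is below. The pool name, user name, and values are assumptions for illustration only.

```xml
<!-- fair-scheduler.xml: illustrative values only -->
<allocations>
  <!-- Default cap on concurrently running apps per user (the "User Limits" default of 5) -->
  <userMaxAppsDefault>5</userMaxAppsDefault>

  <!-- Hypothetical pool showing the per-queue caps a job could exceed -->
  <queue name="etl">
    <maxRunningApps>10</maxRunningApps>
    <maxResources>40960 mb, 20 vcores</maxResources>
  </queue>

  <!-- Raise the limit for one busy user without changing the default -->
  <user name="svc_etl">
    <maxRunningApps>8</maxRunningApps>
  </user>
</allocations>
```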
03-04-2019
06:53 PM
vmem checks have been disabled in CDH almost since their introduction. The vmem check is not stable and is highly dependent on the Linux version and distro. If you run CDH you are already running with it disabled. Wilfred
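For reference, this is the NodeManager property in question; a minimal yarn-site.xml sketch follows (the ratio line is shown only for context, since it is consulted only when the check is enabled):

```xml
<!-- yarn-site.xml (NodeManager): virtual-memory check toggle -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value> <!-- disabled, matching what CDH ships -->
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value> <!-- only used when the vmem check is enabled -->
</property>
```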
02-20-2019
06:26 AM
When the first attempt fails, YARN tries to run the app again, so the status changes from "running" back to "accepted". If you check the RM web UI you can see that several attempts were run.
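The same attempt history is also visible from the command line; the application ID below is a placeholder:

```bash
# List every attempt made for one application (placeholder ID)
yarn applicationattempt -list <application_id>

# Overall state and final status of the application
yarn application -status <application_id>
```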
10-16-2018
09:49 AM
Hi guys, I am facing a similar issue. I have a new installation of Cloudera and I am trying to run the simple MapReduce Pi example and also a Spark job. The MapReduce job gets stuck at the map 0% / reduce 0% step as shown below, and the Spark job spends a lot of time in the ACCEPTED state. I checked the user limit and it is blank for me.
[test@spark-1 ~]$ sudo -u hdfs hadoop jar /data/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 100
Number of Maps = 10
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
18/10/16 12:33:25 INFO input.FileInputFormat: Total input paths to process : 10
18/10/16 12:33:26 INFO mapreduce.JobSubmitter: number of splits:10
18/10/16 12:33:26 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1539705370715_0002
18/10/16 12:33:26 INFO impl.YarnClientImpl: Submitted application application_1539705370715_0002
18/10/16 12:33:26 INFO mapreduce.Job: The url to track the job: http://spark-4:8088/proxy/application_1539705370715_0002/
18/10/16 12:33:26 INFO mapreduce.Job: Running job: job_1539705370715_0002
18/10/16 12:33:31 INFO mapreduce.Job: Job job_1539705370715_0002 running in uber mode : false
18/10/16 12:33:31 INFO mapreduce.Job: map 0% reduce 0%
I made multiple config changes but cannot find a solution for this. The only error I could trace was in the NodeManager log file, as below:
ERROR org.apache.hadoop.yarn.server.nodemanager.NodeManager: RECEIVED SIGNAL 15: SIGTERM
I tried checking the various properties discussed in this thread, but I still have the issue. Can someone please help me solve it? Please let me know what further details I can provide.
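For what it is worth, a job that sits at ACCEPTED / map 0% usually means no NodeManager is offering resources; a quick way to confirm that (assuming the yarn CLI is available on the host) would be:

```bash
# List registered NodeManagers and their state
yarn node -list -all

# Per-node detail: memory and vcores used vs. available (placeholder node ID)
yarn node -status <nodemanager-host:port>
```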
09-19-2018
09:16 PM
I understand this is an older post, but I am getting the same problem. Can you please share the solution if it was resolved for you? Thanks
09-13-2018
10:27 AM
Please follow the steps below.
Options for container size control: now comes the complicated part - there are various overlapping and very poorly documented options for setting the size of Tez containers. According to some links, the following options control how Tez jobs started by Hive behave (a sketch of setting them follows below):
- hive.tez.container.size - value in megabytes
- hive.tez.java.opts
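A minimal sketch of setting both per session in Hive; the values are placeholders, and the heap in hive.tez.java.opts is commonly kept at roughly 80% of hive.tez.container.size:

```sql
-- Per-session overrides in Hive (illustrative values only)
SET hive.tez.container.size=4096;    -- Tez container size in MB
SET hive.tez.java.opts=-Xmx3276m;    -- JVM heap, ~80% of the container size
```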
07-31-2018
07:09 AM
@Harsh J No, we rarely run the balancer in this environment. I'll set it to 3 for now and observe for a while for any recurrence of those WARNs. (CM recommends setting it to a value equal to or greater than the replication factor and less than the number of DNs.) Regards
04-06-2018
01:05 AM
Hi, this does not seem to have worked with a later version of CDH (5.13.1). There we had to set this through the YARN Client Advanced Configuration Snippet (Safety Valve) for yarn-site.xml. So, what is the correct way to set this? Has this really changed in newer releases? Thanks, Sumit
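For context, a safety-valve entry is just a raw yarn-site.xml property block; the name and value below are placeholders only, since this excerpt does not show which setting was involved:

```xml
<!-- YARN Client Advanced Configuration Snippet (Safety Valve) for yarn-site.xml -->
<property>
  <name>yarn.example.placeholder.property</name> <!-- placeholder: substitute the real setting -->
  <value>placeholder-value</value>
</property>
```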
12-29-2017
02:04 AM
Hello everyone. The first step in the ssh action is "source /home/oracle/bash_profile", and then you can run the Oracle command. Thanks
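A minimal sketch of how that usually looks in an Oozie ssh action - the host, script path, and transitions below are assumptions. Since the ssh action runs a single command, the sourcing step and the Oracle call typically live together in a small wrapper script on the remote host:

```xml
<!-- workflow.xml: illustrative ssh action; host and script path are placeholders -->
<action name="run-oracle-step">
  <ssh xmlns="uri:oozie:ssh-action:0.1">
    <host>oracle@dbhost.example.com</host>
    <!-- the wrapper script would contain something like:
         source /home/oracle/bash_profile
         sqlplus -s ... @some_script.sql -->
    <command>/home/oracle/run_oracle_step.sh</command>
    <capture-output/>
  </ssh>
  <ok to="end"/>
  <error to="kill"/>
</action>
```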
12-20-2017
08:36 PM
Thanks for the quick reply.