Member since: 10-02-2017
Posts: 14
Kudos Received: 0
Solutions: 0
10-16-2018 09:37 AM
Hi guys, I am facing a similar issue. I have a new installation of Cloudera and I am trying to run the simple MapReduce Pi example. The job gets stuck at the map 0% reduce 0% step, as shown below.

[test@spark-1 ~]$ sudo -u hdfs hadoop jar /data/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 100
Number of Maps = 10
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
18/10/16 12:33:25 INFO input.FileInputFormat: Total input paths to process : 10
18/10/16 12:33:26 INFO mapreduce.JobSubmitter: number of splits:10
18/10/16 12:33:26 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1539705370715_0002
18/10/16 12:33:26 INFO impl.YarnClientImpl: Submitted application application_1539705370715_0002
18/10/16 12:33:26 INFO mapreduce.Job: The url to track the job: http://spark-4:8088/proxy/application_1539705370715_0002/
18/10/16 12:33:26 INFO mapreduce.Job: Running job: job_1539705370715_0002
18/10/16 12:33:31 INFO mapreduce.Job: Job job_1539705370715_0002 running in uber mode : false
18/10/16 12:33:31 INFO mapreduce.Job: map 0% reduce 0%

I made multiple config changes, but cannot find a solution for this. The only error I could trace was in the NodeManager log file:

ERROR org.apache.hadoop.yarn.server.nodemanager.NodeManager: RECEIVED SIGNAL 15: SIGTERM

I tried checking the various properties discussed in this thread, but I still have the issue. Can someone please help me solve it? Please let me know what details I can provide.
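One thing worth checking first (a diagnostic sketch using the standard YARN CLI, not from the original post): a map 0% reduce 0% hang combined with a SIGTERM in the NodeManager log often means no healthy NodeManagers are registered with the ResourceManager.

yarn node -list -all
# Expect at least one node in RUNNING state; an empty or unhealthy list would
# explain why no containers are being allocated. The same view is available on
# the ResourceManager UI at http://spark-4:8088 (the host from the tracking URL above).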
06-19-2018 04:17 AM
2 Kudos
@Sandeep Ahuja, textFile() partitions based on the number of HDFS blocks the file occupies. If the file is only one block, the RDD is initialized with a minimum of 2 partitions. If you want to increase the minimum number of partitions, pass it as an argument, like below:

files = sc.textFile("hdfs:///user/cloudera/csvfiles", minPartitions=10)

If you want to check the number of partitions, run:

files.getNumPartitions()

Note: If you set minPartitions to less than the number of HDFS blocks, Spark automatically sets the minimum partitions to the number of HDFS blocks and does not give any error.

Please "Accept" the answer if this helps, or reply back with any questions.

-Aditya
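For illustration, a short pyspark session showing both cases (the path is the one from above; it assumes the input fits in a single HDFS block, so the 2-partition minimum kicks in):

$ pyspark
>>> files = sc.textFile("hdfs:///user/cloudera/csvfiles")
>>> files.getNumPartitions()    # single block, so the 2-partition minimum applies
2
>>> files = sc.textFile("hdfs:///user/cloudera/csvfiles", minPartitions=10)
>>> files.getNumPartitions()
10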
03-16-2018 10:38 PM
It appears that your HMaster is crashing during startup. Take a look at the HMaster log file under /var/log/hbase/ to investigate why. If the configured ZooKeeper quorum is running properly, check whether the /hbase znode appears on it, as sketched below.
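A quick way to check (a sketch assuming the hbase client script is on the PATH; the zookeeper-client shell works the same way):

hbase zkcli
ls /
# "/hbase" should appear in the listing once HMaster has initialized its znode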
12-25-2017 12:17 AM
Hi @SandyCT, well, this system is broken a bit more than I expected, since the ownership of groups is also damaged. What did you run, exactly? If I had to guess, some recursive chmod on / or /etc?

Before you try this last option, try switching to the console (Ctrl+Alt+F1 on a normal PC, not sure about the VM) and logging in as root with the password "cloudera".

If this does not work, for whatever reason, here is a way to reboot CentOS 6 in "safe mode". I suggest you make a backup of the whole VM file/directory first: https://lintut.com/reset-forgotten-root-password-in-centos/

If this does not work (I cannot test now, since I don't have my VM around), replace " 1 " in the tutorial with "rw init=/bin/bash".

In either case, this will grant you root, but fixing your VM might take a while. For example, your sudo command should be "---s--x--x" or something to that regard, /etc/sudoers "-r--r-----", and /etc/group "-rw-r--r--"; the chmod equivalents are sketched below. Have fun & good luck! 🙂
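Once you have a root shell, the octal equivalents of those permission strings would be roughly as follows (a sketch derived from the modes above; the ownership line is an assumption, so compare against a healthy CentOS 6 system before applying):

chmod 4111 /usr/bin/sudo    # ---s--x--x: setuid root, execute-only
chmod 0440 /etc/sudoers     # -r--r-----
chmod 0644 /etc/group       # -rw-r--r--
chown root:root /etc/group  # assumption: restore root ownership, since group ownership was damaged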
10-02-2017 06:58 PM
Hi Penta, did it work? Actually, I'm facing the same issue, and this is what I have used:

a1.sources.Twitter.consumerKey=XXX
a1.sources.Twitter.consumerSecret=XXX
a1.sources.Twitter.accessToken=XXX
a1.sources.Twitter.accessTokenSecret=XXX

I am trying to run the Flume agent in the Cloudera VM. Please advise if you or anyone knows the solution. Appreciate your suggestions/help!
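For context, those four keys only set the credentials; the agent also needs the source type and the channel/sink wiring to start. A minimal sketch, assuming the Apache Flume TwitterSource and a memory channel (the channel and sink names and the HDFS path here are illustrative, not from the original config):

a1.sources = Twitter
a1.channels = MemChannel
a1.sinks = HDFS
a1.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
a1.sources.Twitter.channels = MemChannel
a1.channels.MemChannel.type = memory
a1.sinks.HDFS.type = hdfs
a1.sinks.HDFS.channel = MemChannel
a1.sinks.HDFS.hdfs.path = hdfs:///user/flume/tweets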