Member since: 01-08-2014
Posts: 88
Kudos Received: 15
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5083 | 10-29-2015 10:12 AM |
| | 4941 | 11-27-2014 11:02 AM |
| | 5229 | 11-03-2014 01:49 PM |
| | 2926 | 09-30-2014 11:26 AM |
| | 7003 | 09-21-2014 11:24 AM |
04-05-2016
06:23 AM
Please open a new discussion thread for your issue. Older solved threads are unlikely to receive an appropriate amount of attention. I'd recommend you post your MapReduce issue over in the batch processing forum. Be sure to include your version of CDH, a complete stack trace, and the command you used to launch the job.
10-30-2015
09:58 AM
Cloudera Enterprise 5.4.8 has been released: http://community.cloudera.com/t5/Release-Announcements/Announcing-Cloudera-Enterprise-5-4-8/m-p/33614#U33614
07-16-2015
08:32 PM
This works, but if you used Cloudera Manager to install, first use CM to change the HDFS setting dfs.replication.max via the Configuration tab. Then use the Accumulo shell as directed. Jim Heyssel
02-03-2015
07:58 AM
1 Kudo
Each file uses a minimum of one block entry (though that block will only be the size of the actual data). So if you are adding 2736 folders, each with 200 files, that's 2736 * 200 = 547,200 blocks. Do the folders represent some particular partitioning strategy? Can the files within a particular folder be combined into a single larger file? Depending on your source data format, you may be better off looking at something like Kite to handle the dataset management for you.
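The arithmetic above can be sketched as a quick back-of-the-envelope calculation. This is only an illustration of the lower bound on NameNode block entries; the folder and file counts come from the question, and the per-file minimum of one block entry is the only assumption:

```python
# Minimal sketch: estimate the minimum number of HDFS block entries.
# Each file consumes at least one block entry in NameNode metadata,
# no matter how small the file actually is.
folders = 2736
files_per_folder = 200

min_block_entries = folders * files_per_folder
print(min_block_entries)  # 547200
```

Combining the 200 small files in each folder into one larger file would cut this to roughly 2,736 entries (plus extras for files larger than one block), which is why consolidation helps NameNode memory pressure.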
11-27-2014
08:37 PM
Hi, thanks everyone for your valuable input. I was finally able to troubleshoot the problem with my cluster. As it turns out, the remote machine needed the IP addresses of the cluster nodes added to its /etc/hosts file. After doing that, I was able to get the desired results. Thanks everyone!!! Vaibhav
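For readers hitting the same issue, the fix described above amounts to adding entries like the following on the remote (client) machine. The addresses and hostnames here are purely hypothetical placeholders; use the actual IPs and hostnames of your cluster nodes:

```
# /etc/hosts on the remote client machine (example values only)
192.168.1.10   master01.example.com   master01
192.168.1.11   worker01.example.com   worker01
192.168.1.12   worker02.example.com   worker02
```

This matters because HDFS clients are often redirected to DataNodes by hostname, which fails if the client cannot resolve those names.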
11-03-2014
01:49 PM
1 Kudo
Current versions of Spark don't publish a spark-assembly jar artifact (see, for example, Maven Central for upstream). The assembly is used internally by distributions when executing Spark. Instead, you should declare a dependency on whichever part of Spark you actually use, e.g. spark-core.
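As a sketch of what that dependency declaration might look like in a Maven POM: the Scala-version suffix and the version number below are assumptions for illustration (pick the ones matching your cluster), not values from the thread:

```xml
<!-- Depend on the Spark module you use, not a spark-assembly jar. -->
<!-- Scala suffix (_2.10) and version (1.1.0) are example values only. -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.1.0</version>
  <scope>provided</scope>
</dependency>
```

Marking the scope as `provided` is a common choice when the cluster's Spark distribution supplies the jars at runtime.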
09-22-2014
05:22 PM
Hi! I'd be happy to help you with this new problem. To make things easier for future users, how about we mark my answer as the solution for the original thread topic and start a new thread for this issue?
06-26-2014
11:13 PM
1 Kudo
Kevin, I do have proxy settings. When I disabled them, I was able to browse the NameNode UI. Is there any possibility of using both?
06-26-2014
10:06 AM
Thank you very much, busbey. I will do the upgrade from CDH4 -> CDH5. Best Regards, Bommuraj