Member since: 06-26-2013
Posts: 416
Kudos Received: 104
Solutions: 49
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 6831 | 03-23-2016 08:06 AM
 | 11764 | 10-12-2015 01:56 PM
 | 4021 | 03-05-2015 11:11 AM
 | 5618 | 02-19-2015 02:41 PM
 | 10715 | 01-26-2015 09:55 AM
11-23-2016
07:34 AM
Thank you for the feedback, @obar1. We definitely appreciate it and will continue to evaluate our options for incorporating your suggestions.
09-19-2016
01:40 PM
5 Kudos
The Cloudera Community has been helping users find answers to their Hadoop questions for over three years now. Over that time the community, along with the Hadoop ecosystem, has grown and changed in many ways. As a result, we are assessing the current and future state of this community to ensure that we are offering an environment that best meets the needs of our users. This is where we need your help. We are opening this thread to collect your feedback on what you like, what can be improved, and any new features you would like to see in the community.
Some potential areas we would like feedback on are:
Segmentation of the forums: currently the boards are mostly based on components and their function. This makes it difficult to know where to post a new question and can also reduce the number of people who see your topic.
Discussion style: is the current discussion-thread style working for you? Are there examples of other communities with a more intuitive style?
Your total community hub: we are considering a large revamp of how you interact with us and find the content and connections you need. Instead of just discussion boards, would you like a single portal for all your community needs, including local meetup and conference registration, documentation, rich-media tutorials, blogs, contests, and fun rewards like quests, points, and badges that can earn you not only reputation but other real-world benefits?
Are there other ways we can make it easier and more fun to interact with Cloudera, our content, and your peers?
Improvements to current features: what features do we have that you would like to improve? A few examples:
Improved site search
Advanced user profile linked to your social media and allowing you to add and connect with others based on skills/interests
Personalizable views/home screens
Anything else you would like to add.
05-12-2016
10:46 AM
1 Kudo
If you happen to be in the Austin area, you should consider participating in our Hackathon this coming Sunday, May 15, hosted at the Cloudera office and sponsored by Cloudera Cares.
The hackathon will focus on reducing mosquito-borne virus infections through analysis of water data, mapping of mosquito travel, and historical virus analysis.
515 Congress Ave., Suite 1212, Austin, TX
10:30am to 8:00pm
RSVP and further details here.
04-20-2016
12:23 PM
Well, it's been a couple of years since I supported HBase, but what we used to do is delete all the znodes in the /hbase directory in ZK EXCEPT for the /hbase/replication dir. You just have to be a little more surgical with what you're deleting in that RIT situation, IF you're using the HBase replication feature to back your cluster up to a secondary cluster. If not, the previous advice is fine.
Ultimately, regions should not get stuck in transition, though. What version of HBase are you running? We used to have tons of bugs in older versions that would cause this situation, but those should have been resolved long ago.
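For what it's worth, here's a minimal sketch of that "surgical" approach, assuming the default /hbase parent znode and that HBase is fully stopped; child znode names vary by version, so list them first and adapt:
hbase zkcli
# inside the ZooKeeper shell that opens:
ls /hbase                          # see which child znodes your version actually has
rmr /hbase/region-in-transition    # delete each child individually...
rmr /hbase/table                   # ...repeating for the others you find,
# ...but leave /hbase/replication alone, and never run "rmr /hbase" itself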
04-20-2016
11:53 AM
Hey everyone, this is a great thread and I might be showing my "HBase age" here with old advice, but unless something has changed in recent versions of HBase, you cannot use these steps if you are using HBase replication.
The replication counter, which stores the progress of your synchronization between clusters, is stored as a znode under /hbase/replication in ZooKeeper, so you'll completely blow away your replication state if you do an "rmr /hbase".
Please be super careful with these instructions. And to answer @Amanda's question in this thread about why this happens with each upgrade: this RIT problem usually appears if HBase was not cleanly shut down. Maybe you're trying to upgrade or move things around while HBase is still running?
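As a quick pre-check before following any of the cleanup steps in this thread (a sketch only; znode paths and tool output differ between HBase versions):
hbase hbck                   # reports inconsistencies, including regions stuck in transition
hbase zkcli                  # then, inside the ZooKeeper shell:
ls /hbase/replication        # if this znode exists and has children, replication state is live and must be preserved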
03-23-2016
08:06 AM
@mstepanov here's the background, if I remember correctly:
1) The reason rollbacks can be impossible in certain circumstances in HDFS is that the metadata is sometimes changed between versions in non-backward-compatible ways. Basically, the NameNode's fsimage and each HDFS block in the cluster get re-encoded with new bits of metadata that weren't available, and therefore have no meaning, in the older CDH version.
2) We don't always update the HDFS metadata in non-backward-compatible ways between releases, but when we do, it's usually between major releases (e.g., CDH4 to CDH5).
3) If no HDFS metadata changes were made between your original and upgraded versions, rollback is possible in a limited fashion.
4) Even if metadata changes were made, you can still roll back IF you haven't finalized the upgrade yet. See this doc for rollback procedures between CDH5 and CDH4.
Your best bet is to have a solid upgrade plan and test it thoroughly (UAT, QA, and regression) in a similarly configured staging environment prior to upgrade.
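As a rough illustration of points 3 and 4 (a sketch only; the full procedure, including package downgrades and HA/JournalNode handling, is in the rollback doc mentioned above):
sudo -u hdfs hdfs dfsadmin -finalizeUpgrade   # finalizing discards the pre-upgrade metadata; after this, rollback is no longer possible
sudo -u hdfs hdfs namenode -rollback          # before finalizing (and after reverting the packages), starting the NameNode this way restores the pre-upgrade fsimage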
03-22-2016
08:16 AM
@NelsonRonkin, would you be so kind as to start a new topic for your query? I believe since it is slightly different from the original poster's issue, folks may be missing your replies. This might help you get a faster response.
Thank you
02-25-2016
09:44 AM
@megrez80, my tendency here is to take a step back and look at the root cause of your error. I agree that you are probably pushing the limits of our system requirements for testing, and maybe something like Cloudera Live would be a better option for a true POC, but I'm not convinced your problem is a memory problem.
The error clearly says it can't open the logs directory, and you stated that there is no /cmf/role/17/logs directory on your system, so let's start there. Why didn't that directory get created during installation? There should be an install log in your /tmp filesystem that captured what happened when CM was installing.
Forgive me for not remembering the exact file name, but it's something along the lines of scm-install.log. If you can find that file, it might contain some valuable information as to why the correct directories were not created.
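As a rough sketch of where to look (file names and paths vary by CM version and install method, so treat these as guesses to adapt):
ls -lt /tmp/ | grep -i scm                                      # the installer typically drops its logs in /tmp with "scm" in the name
grep -ri "cmf/role" /var/log/cloudera-scm-server/ 2>/dev/null   # the CM server log may also record why the role's log directory wasn't created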
01-12-2016
09:42 AM
3 Kudos
Symptoms
"Permission denied" errors can present in a variety of use cases and from nearly any application that utilizes CDH.
For example, when attempting to start the JobTracker using this command:
service hadoop-0.20-mapreduce-jobtracker start
You may see this error, or one similar to it:
org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4891)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4873)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4847)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3192)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3156)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3137)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:669)
While the steps to reproduce this error can vary widely, the root causes are well defined, and you'll know you're suffering from this issue if you find the following line either on stdout or in the relevant log files:
org.apache.hadoop.security.AccessControlException: Permission denied: user=XXX, access=WRITE, inode="/someDirectory":hdfs:supergroup:drwxr-xr-x
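If it isn't on stdout, a quick way to hunt for it (log locations below assume a standard CDH package install; adjust for your setup):
grep -r "AccessControlException: Permission denied" /var/log/hadoop-hdfs/ /var/log/hadoop-0.20-mapreduce/ 2>/dev/null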
Applies To
CDH (all versions), MapReduce, HDFS, and other services that rely on reading from or writing to HDFS
Cause
Access to the HDFS filesystem and/or permissions on certain directories are not correctly configured.
Troubleshooting Steps
There are several solutions to attempt:
1) The /user/ directory is owned by "hdfs" with 755 permissions. As a result, only hdfs can write to that directory. Unlike unix/linux, hdfs is the superuser, not root. So you would need to do this:
sudo -u hdfs hadoop fs -mkdir /user/<username>
sudo -u hdfs hadoop fs -put myfile.txt /user/<username>/
If you want to create a home directory for root so you can store files in his directory, do:
sudo -u hdfs hadoop fs -mkdir /user/root
sudo -u hdfs hadoop fs -chown root /user/root
Then as root you can do "hadoop fs -put file /user/root/".
2) You may also be getting denied on the network port where the NameNode is supposed to be listening:
Fix this by changing the address the service listens on in /etc/hadoop/conf/core-site.xml. By default, your NameNode may be listening on "localhost:8020" (127.0.0.1).
So to be clear, implement this value for the following property:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://0.0.0.0:8020</value>
</property>
Then bounce the service with "service hadoop-hdfs-namenode restart". Optional: validate with netstat -tupln | grep '8020'