Member since: 07-09-2015
Posts: 18
Kudos Received: 1
Solutions: 0
08-17-2017
10:30 PM
@JaySenSharma Thanks! Your answer helped me!!
02-10-2017
10:02 PM
I had this same issue. I finally did an ambari-server upgrade and it worked. Note that I was not doing an upgrade; I was just restarting after booting up a node.
02-09-2017
07:13 PM
This solution worked for me. Thanks!
08-26-2016
06:23 PM
Makes sense, thanks a bunch!
08-25-2016
09:24 PM
Thanks a lot for the fast response. Any tips to make sure everything goes smoothly? Is there a good time during installation of the new node to add the file? Thanks a bunch!
08-25-2016
09:12 PM
1 Kudo
Hi there- I have added a .jar file to the HBase directory (to work with Apache Trafodion). Example: /opt/hdp/hbase-regionserver/lib/trafodion-hbase.jar. I have configuration settings added to the HBase config to use this jar file for coprocessor actions. When I add a new node with Ambari, this jar file is not copied over as part of the add. This could cause problems during operation, since HBase is looking for that jar file. Is there some place I could copy this file so that Ambari knows to copy it over? Thanks! Appreciate any suggestions.
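One workaround sketch (untested here, and the table name, observer class, and priority below are placeholders; whether this fits Trafodion's particular coprocessors would need checking): HBase can load a table-level coprocessor jar straight from HDFS, so every region server, including freshly added nodes, fetches it without needing a local copy:

```
# Put the jar somewhere in HDFS that all region servers can read
hdfs dfs -mkdir -p /hbase/lib
hdfs dfs -put /opt/hdp/hbase-regionserver/lib/trafodion-hbase.jar /hbase/lib/

# In hbase shell, attach it to a table by full HDFS path
# (table name, class name, and priority are placeholders)
alter 'MY_TABLE', METHOD => 'table_att',
  'coprocessor' => 'hdfs:///hbase/lib/trafodion-hbase.jar|com.example.MyRegionObserver|1001|'
```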
08-25-2016
02:08 PM
Hi there- I have added a .jar file to the HBase directory (to work with Apache Trafodion). Example: /usr/lib/hbase/lib/trafodion-hbase.jar. I have configuration settings added to the HBase config to use this jar file for coprocessor actions. When I add a new node with Cloudera Manager, this jar file is not copied over as part of the add. This could cause problems during operation, since HBase is looking for that jar file. Is there some place I could copy this file so that Cloudera knows to copy it over? Thanks! Appreciate any suggestions.
04-20-2016
12:18 PM
Hi there Clint- What would you suggest be done when HBase gets a region stuck in transition? I am all ears! Thanks! Amanda
04-08-2016
04:38 PM
The comments on this JIRA helped me: https://issues.apache.org/jira/browse/AMBARI-9776
03-24-2016
02:55 PM
Hi there All- Had this issue today as well. None of the solutions I found were working (I even did two uninstall/reinstalls hoping that would fix it... no). Finally figured it out after Sartners' suggestion to look at that set of logs (I didn't even know those existed! THANK YOU). I had a symbolic link that was broken; once I recreated it, I was able to get installed:

sudo ln -s /etc/hadoop/conf.cloudera.hdfs /etc/alternatives/hadoop-conf

Hope this helps someone else!
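For anyone hitting this later, the broken-link check can be scripted. This is a generic sketch using a throwaway sandbox rather than the real /etc paths, so it is safe to run anywhere:

```shell
#!/bin/sh
# Demo: detect and repair a dangling symlink, same pattern as the
# hadoop-conf fix above, but in a temporary sandbox.
tmp=$(mktemp -d)
mkdir "$tmp/conf.cloudera.hdfs"              # stands in for /etc/hadoop/conf.cloudera.hdfs
ln -s "$tmp/no-such-dir" "$tmp/hadoop-conf"  # a dangling link, as on the broken node

# -L: it is a symlink; ! -e: its target does not resolve => broken
if [ -L "$tmp/hadoop-conf" ] && [ ! -e "$tmp/hadoop-conf" ]; then
    echo "broken link: $tmp/hadoop-conf -> $(readlink "$tmp/hadoop-conf")"
    # Repoint it at the real config dir, same idea as the sudo ln -s fix
    ln -sfn "$tmp/conf.cloudera.hdfs" "$tmp/hadoop-conf"
fi

[ -e "$tmp/hadoop-conf" ] && echo "link ok"
rm -rf "$tmp"
```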
02-29-2016
01:24 PM
Currently using CDH 5.4.5. I have a curl command that sets some HDFS settings that I need for my cluster:

curl -k -X PUT -H 'Content-Type:application/json' -u $ADMIN:$PASSWORD --data \
'{
  "roleTypeConfigs": [
    { "roleType": "NAMENODE",
      "items": [ { "name": "namenode_java_heapsize", "value": "1073741824" } ] },
    { "roleType": "SECONDARYNAMENODE",
      "items": [ { "name": "secondary_namenode_java_heapsize", "value": "1073741824" } ] }
  ],
  "items": [ { "name": "dfs_namenode_acls_enabled", "value": "true" } ]
}' \
$URL/api/v1/clusters/$CLUSTER_NAME/services/hdfs/config

By doing this, any other settings (e.g. NameNode heap size) I have set by hand for HDFS on my cluster are wiped out. Is there a way I can preserve the changes I have made by hand and still set the settings I need using this REST API? Thanks!!
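One pattern that may avoid clobbering hand-made changes (an untested sketch, using the same $ADMIN/$PASSWORD/$URL/$CLUSTER_NAME variables as the post above): read the current service config back first, fold those items into the payload, then PUT the merged result:

```
# 1. Read the full current HDFS service config from the CM API
curl -k -u $ADMIN:$PASSWORD \
  "$URL/api/v1/clusters/$CLUSTER_NAME/services/hdfs/config?view=full" \
  -o current-config.json

# 2. Merge current-config.json's "items"/"roleTypeConfigs" with the
#    values you want to change (by hand or with a small script)

# 3. PUT the merged payload with the same curl command as before
```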
12-14-2015
12:53 PM
Hi there. I am trying to install Cloudera Manager. I grabbed the latest installer:

http://archive.cloudera.com/cm5/installer/latest/cloudera-manager-installer.bin

I am using CentOS 7.1:

sudo cat /etc/centos-release
CentOS Linux release 7.1.1503 (Core)

This is supported according to the OS support matrix:

http://www.cloudera.com/content/www/en-us/documentation/enterprise/latest/topics/pcm_os.html

Yet I get "Your distribution is not supported" when I try to install. From looking at the board, it seems no one else has had this issue. What am I missing here? Thanks!
09-29-2015
10:14 AM
"If you delete the /hbase directory in zookeeper, you might be able to keep the data." Thanks for the response. I am not sure how to delete this just in zookeeper. Is there a command for that?
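In case it helps anyone later, a sketch of one way to do it (run against a live cluster at your own risk, and back things up first, since this wipes HBase's znodes): HBase ships a ZooKeeper shell you can use.

```
hbase zkcli
# at the zkcli prompt (older ZooKeeper CLIs use rmr; newer ones use deleteall):
rmr /hbase
```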
09-28-2015
01:59 PM
I have done 4 different upgrades (on 4 different clusters) and I get this error every time. I have to wipe out /hbase and lose the data, which is the exact opposite of why I am doing an upgrade. There must be a step missing from the upgrade instructions; I have followed them each time. I tried your solution here, and it doesn't work. When I try to restart HBase, the master fails and I get this:

Failed to become active master
org.apache.hadoop.hbase.util.FileSystemVersionException: HBase file layout needs to be upgraded. You have version null and I want version 8. Consult http://hbase.apache.org/book.html for further information about upgrading HBase. Is your hbase.rootdir valid? If so, you may need to run 'hbase hbck -fixVersionFile'.

I run hbase hbck -fixVersionFile and it gets stuck on:

15/09/28 20:51:58 INFO client.RpcRetryingCaller: Call exception, tries=14, retries=35, started=128696 ms ago, cancelled=false, msg=
08-11-2015
03:09 PM
I am also getting this same error when installing the 5.3.4 packages. It says this has been solved, but I see no solution. Thanks!