Member since: 10-31-2014
Posts: 31
Kudos Received: 6
Solutions: 1

My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
| | 4647 | 04-23-2015 03:10 AM |
02-10-2017 12:32 AM
1 Kudo
In your case, I think the issue is that the "kadmin" user doesn't exist in Linux (or at least not on all nodes).
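A quick way to confirm is to check for the user on every node; the host names below are placeholders for your cluster, so this is just a sketch:

```bash
# Hypothetical host list; replace with the actual nodes of your cluster
for host in node1 node2 node3; do
  # id exits non-zero if the user does not exist on that host
  ssh "$host" 'id kadmin || echo "kadmin missing on $(hostname)"'
done
```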
04-20-2016 12:26 PM
Nowadays there is a "clean" operation in the shell admin utilities that can be used to remove data files, ZK data, or both. I would guess that tool takes into account what you are pointing out.
01-29-2016 10:35 PM
An exit code of -1 means Java crashed; normally that is due to classpath or memory settings.
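As a first step you could pull the logs and grep for the usual suspects; this sketch assumes the job ran on YARN and that you know its application ID (the one below is made up):

```bash
# Fetch the aggregated logs for the failed application (application ID is a placeholder)
yarn logs -applicationId application_1234567890123_0001 > app.log

# Classpath problems and memory problems leave recognisable traces
grep -iE "ClassNotFoundException|NoClassDefFoundError|OutOfMemoryError|GC overhead" app.log
```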
10-09-2015 12:22 AM
You'll have to view the logs of the YARN node running the executor; it's not very obvious how to see the logs in the YARN console. If I had to make a wild guess, I would say the user you are running the job with doesn't exist on the node running the executor.
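Assuming log aggregation is enabled and you can SSH to the worker, two quick checks along those lines (the application ID, host name and user name are placeholders):

```bash
# Fetch the executor/container logs through YARN instead of the console
yarn logs -applicationId application_1234567890123_0002

# On the node that ran the executor, confirm the submitting user exists
ssh worker-node-01 'id sparkuser'
```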
09-29-2015 11:04 AM
You have to use the command line. It should be something like this:

```bash
# Start the command line and connect to any of the ZK servers
# If you are not using CDH, the command is zkCli.sh
# If your cluster is kerberized you need to kinit first, otherwise the delete will fail
zookeeper-client -server localhost:2181

# Once in the shell, run this to delete the hbase znode with the metadata
rmr /hbase
```
08-09-2015 02:45 AM
1 Kudo
@EugeneM wrote: I have tried your steps, but I still have inconsistencies and hbck -repair does not work. My inconsistencies are with data tables and not with META. I get the following error message: INFO util.HBaseFsckRepair: Region still in transition, waiting for it to become assigned, and it eventually times out. I am using CDH 5.4.4 with HBase 1.0.0. I cannot do anything on HBase (count, scan, etc.).

In your case, if "hdfs fsck" doesn't fix the files, you are going to have to delete the corrupted HDFS table files. If you can load the data again, probably the best thing is to delete the /hbase directory in HDFS altogether, restart, and load the data again.
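A rough sketch of that last-resort path, assuming the data can be fully reloaded afterwards and that HBase has been stopped first (the paths are the defaults, adjust if yours differ):

```bash
# See which HBase files HDFS considers corrupt
hdfs fsck /hbase -files -blocks

# Last resort: with HBase stopped, wipe its HDFS directory, then restart and reload the data
hdfs dfs -rm -r /hbase
```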
06-18-2015 12:54 AM
1 Kudo
You are not specifying the jar that contains that class (the examples jar). It could be that the jar is included automatically in local mode but not in the YARN classpath. Have a look at the NodeManager log for the node that tried to run it to verify whether it's a classpath issue.
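For example, pointing hadoop jar at the examples jar explicitly makes YARN ship it to the nodes; the parcel path below is the usual CDH location, so treat it as an assumption:

```bash
# Name the examples jar explicitly so it ends up on the YARN classpath
hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 100
```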
04-23-2015 03:10 AM
Ok, you have to build a MapReduce job and run it. Basically you need to implement a few interfaces (mapper, reducer), create a jar with that code, and submit it either from the command line or from Java by creating, configuring, and starting a Job. See this tutorial: https://hadoop.apache.org/docs/r2.6.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
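The command-line side of that workflow looks roughly like this; the class and paths follow the WordCount example from the linked tutorial, so adapt them to your own code:

```bash
# Compile against the Hadoop classpath and package the classes into a jar
javac -classpath "$(hadoop classpath)" WordCount.java
jar cf wc.jar WordCount*.class

# Submit the job to the cluster with input and output HDFS paths
hadoop jar wc.jar WordCount /user/me/input /user/me/output
```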
04-23-2015 02:38 AM
1 Kudo
If restarting the master didn't do anything, I would say the hbase znode is messed up in ZooKeeper. If you have nothing to lose: stop HBase, delete the znode in ZooKeeper, delete the hbase folder in HDFS, and start HBase.
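Spelled out as commands, assuming a CDH-style install where the ZooKeeper client is called zookeeper-client; everything here destroys data, so only run it if you truly have nothing to lose:

```bash
# 1. Stop HBase first (via Cloudera Manager or your init scripts)

# 2. Delete the hbase znode in ZooKeeper
zookeeper-client -server localhost:2181 rmr /hbase

# 3. Delete the HBase directory in HDFS
hdfs dfs -rm -r /hbase

# 4. Start HBase again; it will recreate the znode and directory layout
```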
04-23-2015 01:50 AM
Sounds like a bug; check the bugs reported about regions in transition and compare with the version you are using. If repair doesn't work, the only solution I see is to take a snapshot, truncate the table, and then import the snapshot (maybe try importing into another table before you truncate the main one).
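In the HBase shell that sequence would look something like this; the table and snapshot names are placeholders, and the clone into a side table is the cautious step mentioned above:

```bash
# Run the sequence non-interactively through the HBase shell
hbase shell <<'EOF'
snapshot 'mytable', 'mytable_snap'

# Cautious route: clone into a throwaway table first and verify it
clone_snapshot 'mytable_snap', 'mytable_check'

# restore_snapshot replaces the truncate-and-reimport step
disable 'mytable'
restore_snapshot 'mytable_snap'
enable 'mytable'
EOF
```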