Member since: 09-01-2014
Posts: 23
Kudos Received: 0
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1573 | 09-09-2014 02:16 AM |
| | 3370 | 09-08-2014 08:30 PM |
| | 1249 | 09-08-2014 04:23 AM |
| | 5794 | 09-05-2014 03:50 AM |
05-07-2015
03:49 AM
Hi, a newbie here. Did you ever solve this problem? I'm currently having the same issue. Thank you.
09-09-2014
02:16 AM
Solved this issue. My problem was simple: my NodeManager log directory setting was not pointing to the container directory. I had

yarn.nodemanager.log-dirs=/apps/ext/var/log/hadoop-yarn

when it was supposed to be

yarn.nodemanager.log-dirs=/apps/ext/var/log/hadoop-yarn/container

I figured it out after seeing this message on my NodeManager web UI, under Node Log information: "NodeHealthReport 1/1 log-dirs turned bad". I was able to run and finish the PiEstimator after updating log-dirs and restarting the service.
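For anyone hitting the same health check, a quick sanity check I could have run first (a sketch with my own paths; port 8042 is the default NodeManager web UI port):

```
# Verify the container log directory exists and is writable by the yarn user.
sudo -u yarn test -w /apps/ext/var/log/hadoop-yarn/container && echo writable
# The NodeManager web UI (http://<nm-host>:8042/node) shows log-dir health
# under "Node Log information".
```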
09-08-2014
10:30 PM
I was following the installation test guide below:
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM5/latest/Cloudera-Manager-Installation-Guide/cm5ig_testing_the_install.html

I ran this command from one of my hosts:

```
sudo -u hdfs hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 100
```

This is what I get:

```
[apps@analyticpapp2 ~]$ sudo -u hdfs hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 100
Number of Maps  = 10
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
14/09/09 10:09:53 INFO client.RMProxy: Connecting to ResourceManager at analyticpapp1/xx.x.xxx.xx:8032
14/09/09 10:09:54 INFO input.FileInputFormat: Total input paths to process : 10
14/09/09 10:09:54 INFO mapreduce.JobSubmitter: number of splits:10
14/09/09 10:09:55 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1410232070158_0001
14/09/09 10:09:55 INFO impl.YarnClientImpl: Submitted application application_1410232070158_0001
14/09/09 10:09:55 INFO mapreduce.Job: The url to track the job: http://analyticpapp1:8088/proxy/application_1410232070158_0001/
14/09/09 10:09:55 INFO mapreduce.Job: Running job: job_1410232070158_0001
```

And it just stops there; nothing happens.

My cluster consists of 10 hosts. On 9 hosts, I've set 6 services on each:
- HBase RegionServer
- HDFS DataNode
- Hive Gateway
- Impala Daemon
- Spark Worker
- YARN NodeManager

On 1 host (the head node), I run the following services:
- HBase Master
- HDFS NameNode
- HDFS SecondaryNameNode
- Hive Metastore
- Hive HiveServer2
- Hive Gateway
- Hue Server
- Impala Catalog Server
- Impala StateStore
- HBase Indexer
- Oozie Server
- Solr Server
- Spark Master
- Sqoop Server
- YARN JobHistory
- YARN ResourceManager
- ZooKeeper Server

Any advice? Thanks in advance.
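For anyone else whose job hangs right after submission, a couple of hedged diagnostics I would try (standard YARN CLI commands; the application ID below is just the one from my run):

```
# List NodeManagers and their health; unhealthy nodes cannot run containers.
sudo -u hdfs yarn node -list -all
# Show the stuck application's state and diagnostics string.
sudo -u hdfs yarn application -status application_1410232070158_0001
```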
09-08-2014
08:30 PM
Thanks for the reply. I tried the steps you suggested, but the problem remained. I solved this by following the steps provided by JohnKelly of the CDH Google Group, combined with the CDH uninstall guide:

1. Follow this instruction:
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH5/latest/CDH5-Installation-Guide/cdh5ig_cdh_comp_uninstall.html

2. Run the following commands (credits to JohnKelly):

```
rpm -e --allmatches $(rpm -qa | grep -e^hadoop -e^cloudera -e^hue -e^oozie -e^hbase -e^impala -e^flume -e^hive)
yum clean all
sudo rm -Rf /usr/share/cmf /var/lib/cloudera* /var/cache/yum/cloudera*
rm /tmp/.scm_prepare_node.lock
```

After that everything was good.
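As a sanity check of my own (not part of JohnKelly's steps), I confirmed nothing survived before reinstalling:

```
# No output here means the uninstall is clean.
rpm -qa | grep -e '^hadoop' -e '^cloudera' -e '^hue' -e '^oozie' -e '^hbase' -e '^impala' -e '^flume' -e '^hive'
```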
09-08-2014
08:15 PM
Never mind, the log storage setting for HBase is available in CM. I didn't realize those arrows next to the category labels can be expanded; there is a 'Log' category where I can update the value.
09-08-2014
04:23 AM
Please disregard this post. Dumb post. I didn't realize those arrows next to each category can be expanded, which gives me the ability to change the log directory. Sorry.
09-08-2014
12:35 AM
/var/log/hbase

I've restarted the entire cluster and now it's up again. But the problem remains: I'm getting a bad health alert because the disk space under /var/log is insufficient. I want to change the log directory for HBase to my /apps/ext/var/log directory, but how can I do that? There is no log storage setting for HBase in CM. Thanks for the reply.
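(For reference, on a plain, non-CM-managed HBase install the log location is normally controlled by HBASE_LOG_DIR in hbase-env.sh; a minimal sketch with my intended target path. As it turned out later, CM does expose this under the expandable 'Log' category of the role's configuration.)

```
# hbase-env.sh sketch (non-CM installs); path is my intended target.
export HBASE_LOG_DIR=/apps/ext/var/log/hbase
```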
09-07-2014
11:10 PM
The configuration in CM doesn't allow this. I tried to change it in /etc/default/impala, but nothing happened after restarting impala-state-store and the catalog via CM. Thanks.
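(My assumption about why the edit had no effect: CM-managed daemons don't read /etc/default; CM regenerates each role's configuration on start into a per-process directory, which can be inspected to see what the role actually received.)

```
# List the most recent CM-generated process config directories.
ls -lt /var/run/cloudera-scm-agent/process/ | head
```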
Labels:
- Apache Impala
- Cloudera Manager
09-07-2014
10:24 PM
I've just completed the CM and CDH installation and am trying to add the HBase service to the Master node. This is not a production environment and no data has been loaded yet; it's pretty much a clean install.

After adding the HBase service to the Master node, I got a bad health alert saying that my disk space is insufficient. So what I tried to do was move the HBase log directory to a different location using a symlink. I realized too late that I forgot to stop the HBase service while doing this. After I finished the symlink, I restarted the HBase service and it failed with the error below.

Question: the HBase service configuration in CM doesn't have an option to move or update the log directory. What is your suggestion for doing this in the future? Thanks in advance.

```
Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
	at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2775)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:184)
	at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:134)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
	at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2789)
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
	at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
	at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.createNonSequential(RecoverableZooKeeper.java:489)
	at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.create(RecoverableZooKeeper.java:468)
	at org.apache.hadoop.hbase.zookeeper.ZKUtil.createWithParents(ZKUtil.java:1233)
	at org.apache.hadoop.hbase.zookeeper.ZKUtil.createWithParents(ZKUtil.java:1211)
	at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.createBaseZNodes(ZooKeeperWatcher.java:174)
	at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:167)
	at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:472)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2770)
	... 5 more
```
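(A note on the trace, as my own reading: the root cause shown is a ZooKeeper ConnectionLoss on /hbase, so the Master may also have been unable to reach ZooKeeper, which could be a separate issue from the log-directory move.) For anyone relocating the log directory the same way, a minimal sketch of the order of operations, assuming the standard hbase user and my paths; with CM the roles should be stopped and started from the CM UI rather than init scripts:

```
# Sketch only: stop the HBase roles in CM first, then relocate the logs.
sudo mv /var/log/hbase /apps/ext/var/log/hbase
sudo ln -s /apps/ext/var/log/hbase /var/log/hbase
# Keep ownership consistent so the hbase user can still write.
sudo chown -h hbase:hbase /var/log/hbase
# Finally, start the HBase roles again from CM.
```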
Labels:
- Apache HBase
09-05-2014
03:50 AM
I've solved this issue. The Cloudera embedded DB could not write because it is located in my /var directory (under /var/lib/cloudera-scm-server-db), which is full. I used this step-by-step solution (credits to puneethabm from the Google group):

1) Stop the SCM services:

```
# service cloudera-scm-server stop
# service cloudera-scm-server-db stop
```

2) Copy cloudera-scm-server-db, retaining permissions, then remove the old data directory:

```
# cd /var/lib
# cp -rp cloudera-scm-server-db /dir1/lib/
# cd /var/lib/cloudera-scm-server-db
# rm -rf data/
```

3) Create the symlink:

```
# cd /var/lib/cloudera-scm-server-db
# ln -s /dir1/lib/cloudera-scm-server-db/data data
```

4) Start the services:

```
# service cloudera-scm-server-db start
DB initialization done.
waiting for server to start.... done
server started
# service cloudera-scm-server start
```

Creating a symlink to a disk with sufficient free space allows the Cloudera embedded DB to perform write transactions again.
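One addition of my own, not part of puneethabm's steps: confirm where the space actually is before and after the move (/dir1 is the example target from the steps above):

```
# /var should show as nearly full; the target filesystem should have headroom.
df -h /var /dir1
# After the move, verify the symlink points where expected.
ls -l /var/lib/cloudera-scm-server-db/data
```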