Member since: 06-26-2013
Posts: 416
Kudos Received: 104
Solutions: 49
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 6837 | 03-23-2016 08:06 AM |
| | 11782 | 10-12-2015 01:56 PM |
| | 4030 | 03-05-2015 11:11 AM |
| | 5623 | 02-19-2015 02:41 PM |
| | 10743 | 01-26-2015 09:55 AM |
11-04-2013
02:22 PM
You are correct. It's either the NFS-based shared edits directory OR the QJM-based HA config.
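For illustration (this snippet is not from the original thread), a minimal hdfs-site.xml sketch of the two alternatives; the hostnames, port, and mount path are placeholder assumptions, and you would configure exactly one of them:

```xml
<!-- Option A: NFS-based shared edits directory (mount path is a placeholder) -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>file:///mnt/filer/shared-edits</value>
</property>

<!-- Option B: Quorum Journal Manager (JournalNode hosts are placeholders; 8485 is the default JournalNode port) -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster</value>
</property>
```

Both options use the same property name, which is why the configuration is one or the other, never both.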
11-04-2013
01:53 PM
Yes, that's still how the process works. Once the HDFS service is installed and running, setting up HA is a separate workflow that lets you choose your fencing mechanism, manual or automatic failover, quorum journal nodes, etc.
I believe this is the doc you will need.
11-01-2013
05:14 PM
5 Kudos
@happynodes I have moved your thread to the Cloudera Manager board, because you mentioned that you were using Parcels, and as far as I know that is a CM-specific packaging model.
To answer your question, there are mechanisms throughout CM to control the size and number of log files that are retained. Please be aware that every Hadoop service, as well as the Cloudera Manager management/monitoring services, keeps its own logs, and by default those end up in /var/log.
For example, browse to your hdfs1 service page and click "Configuration -> View and Edit". On the left-hand side you can expand several menus and find "Logs" sections, which allow you to configure the logging of that service; Datanode, Failover Controller, and Namenode are a few examples.
What I would recommend is to try to identify which actual directory under /var/log is the culprit here and then go into CM and adjust the log retention settings for that service.
10-31-2013
10:22 AM
1 Kudo
The hdfs-site.xml file that you are viewing in CM, which @smark helped you find, resides on the local filesystem of that remote datanode. It will not be in /etc/hadoop/conf, though (unless you re-deploy your client configs to that machine), because CM maintains its own configuration directory in /var/run/cloudera-scm-agent/process for the roles it manages. You will find the hdfs-site.xml file under that directory, in the latest ???-Datanode directory.
10-30-2013
08:25 AM
Thank you for the report, @ManishChopra. We had a temporary portal issue this morning, but it has now been resolved. Your feedback is very much appreciated!
10-28-2013
09:23 AM
Could it have something to do with old cache files from before you made the change not being cleaned out? I think there is a mechanism for retiring these old files and moving them off or deleting them, but I'm not positive whether that applies to the actual jobcache files.
Maybe this blog contains the clue? A sketch of the settings I have in mind follows below the link.
http://blog.cloudera.com/blog/2010/11/hadoop-log-location-and-retention/
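As an assumption about which retention knobs that blog likely describes (MRv1-era settings, not taken from the original post), a hedged mapred-site.xml sketch; verify the exact property names against your Hadoop/CDH version:

```xml
<!-- How long task userlogs are retained before being cleaned up (default is 24 hours) -->
<property>
  <name>mapred.userlog.retain.hours</name>
  <value>24</value>
</property>
<!-- If set to true, intermediate files for failed tasks are kept under mapred.local.dir
     (including the jobcache area), which can make old files linger -->
<property>
  <name>keep.failed.task.files</name>
  <value>false</value>
</property>
```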
10-28-2013
08:54 AM
1 Kudo
When you are running on a single machine, you must set the replication factor (dfs.replication) to 1. Since the default is 3 and there are not 3 datanodes in your cluster, HDFS will just sit there trying to replicate blocks that it cannot. See below from your fsck output:
Default replication factor: 3
Under-replicated blocks: 126 (100.0 %)
If you restart the cluster with replication set to one, the cluster should report healthy again.
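For illustration (not from the original reply), a minimal hdfs-site.xml sketch for a single-node setup:

```xml
<!-- Single-node cluster: only one datanode, so each block can have only one replica -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
```

Note that files already written keep the replication factor they were created with; if blocks still show as under-replicated after the restart, something like `hadoop fs -setrep -R 1 /` can lower the factor on existing files.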
10-21-2013
01:54 PM
Do you have the hbase.zookeeper.quorum property I mentioned previously in your /etc/hbase/conf/hbase-site.xml file on these systems? It sounds like your HBase clients (any app trying to access the HBase service) are using the default value for the ZK quorum, which has them looking on localhost for a ZK server. That is why it works on the nodes that are running a ZK instance. You need a valid hbase-site.xml file on each node that specifies the ZK quorum, as described in the link I posted. I hope that helps.
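For illustration (not from the original reply), a hedged hbase-site.xml sketch; the hostnames are placeholders for your actual ZooKeeper servers:

```xml
<!-- List every host in the ZK quorum; clients use this to find ZooKeeper instead of localhost -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
</property>
<!-- Only needed if ZK is not listening on the default client port -->
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
```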
10-21-2013
12:26 PM
Can you give a bit more detail as to what you are doing when you encounter this error? And is the machine where you are seeing this one of those 8 nodes in the cluster? Or an external machine?
I've seen this before when a client app outside the cluster could not connect to the ZooKeeper quorum because a local copy of the hbase-site.xml file was not in the application's path; it therefore did not know who the ZooKeeper servers were, and the error looked like yours. The property that needs to be specified for the client is hbase.zookeeper.quorum.
http://hbase.apache.org/book/zookeeper.html
10-16-2013
12:34 PM
Thanks for closing the loop with us and posting back the solution, JakeZ.