- Member since: 08-19-2013
- Posts: 392
- Kudos Received: 29
- Solutions: 9
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3279 | 09-12-2019 01:04 PM |
| | 3373 | 08-21-2019 04:56 PM |
| | 9045 | 07-03-2018 07:59 AM |
| | 6995 | 10-09-2015 08:02 AM |
| | 3094 | 04-29-2015 12:14 PM |
10-30-2015
09:58 AM
5.4.8 has been released. http://community.cloudera.com/t5/Release-Announcements/Announcing-Cloudera-Enterprise-5-4-8/m-p/33614#U33614
10-29-2015
09:52 AM
HDFS-8384 is fixed in CDH 5.3.8 per the release notes but is not in CDH 5.4.7. It should be available in CDH 5.4.8 when it releases.
10-09-2015
08:02 AM
2 Kudos
You are running with racks defined on the hosts: hdfs-8.xxx belongs to rack /dc19, hdfs-9.xxx belongs to rack /dc13, and the rest of the hosts are in rack /default. This is an incorrect use of racks. To avoid breaking rack placement, the balancer is not able to move blocks off of these two hosts. Consider changing these two hosts to the /default rack.
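On a live cluster, `hdfs dfsadmin -printTopology` shows the actual rack assignments. As a quick offline way to spot which hosts sit outside the majority rack, here is a small sketch (host names and racks below are hypothetical examples, not taken from your cluster):

```shell
# Sketch: given "<host> <rack>" lines, print the hosts whose rack
# differs from the majority rack -- these are the likely outliers
# that pin the balancer.
flag_rack_outliers() {
  awk '{count[$2]++; rack[$1]=$2}
       END {
         best = ""
         for (r in count) if (count[r] > count[best]) best = r
         for (h in rack) if (rack[h] != best) print h, rack[h]
       }'
}

printf '%s\n' \
  "hdfs-1.example.com /default" \
  "hdfs-2.example.com /default" \
  "hdfs-8.example.com /dc19" \
  "hdfs-9.example.com /dc13" | flag_rack_outliers | sort
# prints the two outlier hosts with their racks
```

In Cloudera Manager the fix itself is done per host (Hosts > Assign Rack), not with a script; the sketch above only illustrates the diagnosis.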
04-29-2015
12:14 PM
2 Kudos
The HDFS Balancer only balances blocks between DataNodes; it does not do any balancing between the drives of an individual DataNode. You can set the DataNode Volume Choosing Policy (dfs.datanode.fsdataset.volume.choosing.policy) to Available Space (org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy). This causes the DataNodes to write new blocks to the drive with the most available space; it does not affect blocks that have already been written. For your question about wiping one DataNode at a time, it would be better to decommission and then recommission a node. With a replication factor of 3 you may perform this action on 2 nodes at a time.
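For reference, the property above can be set in hdfs-site.xml (or through the equivalent Cloudera Manager configuration field); a minimal sketch:

```xml
<!-- hdfs-site.xml: pick the target volume by available space
     instead of round-robin when writing new blocks -->
<property>
  <name>dfs.datanode.fsdataset.volume.choosing.policy</name>
  <value>org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy</value>
</property>
```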
04-23-2015
10:01 AM
3 Kudos
1. Stop HBase.
2. Move your original /hbase back into place.
3. Use a ZooKeeper CLI such as "hbase zkcli" [1] and run "rmr /hbase" to delete the HBase znodes.
4. Restart HBase. It will recreate the znodes.

Also check for inconsistencies after HBase is up. As the hbase user, run "hbase hbck -details". If there are inconsistencies reported, normally I would use the "ERROR" messages from the hbck output to decide on the best repair method [2], but since you were willing to start over, just run "hbase hbck -repair".

If HBase fails to start after this, or the above fails, you can always try the offline Meta repair: hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair

[1] http://hbase.apache.org/book.html#trouble.tools
[2] http://hbase.apache.org/book.html#hbck.in.depth
04-13-2015
08:21 AM
The error message you see in the log would not prevent Cloudera Manager Server from starting. It indicates that a background thread is attempting to locate a parcel repository at one of the defined parcel repo URLs, which suggests the host running Cloudera Manager Server may not have access to the internet: the default parcel URLs point at http://archive.cloudera.com

Just to clarify: currently you are unable to connect from your desktop/laptop to port 7180 on the host running Cloudera Manager Server? The above post indicates Cloudera Manager Server is listening for requests on *:7180, so you should be able to connect with http://<cm_server>:7180/

If you are not able to connect:
1. Make sure iptables is turned off.
2. See if you can connect with telnet: telnet <cm_server> 7180
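If telnet is not installed, bash can test the TCP connection itself via its built-in /dev/tcp redirection (bash-specific, not plain sh); a sketch, where CM_HOST is a placeholder for your Cloudera Manager host:

```shell
# check_port HOST PORT -> exit 0 if a TCP connection succeeds
check_port() {
  local host=$1 port=$2
  # bash treats /dev/tcp/<host>/<port> as a TCP connection (fd 3
  # is opened in a subshell and closed when it exits)
  (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null
}

# Example usage (CM_HOST is a placeholder, not a real variable CM sets):
check_port "${CM_HOST:-localhost}" 7180 \
  && echo "port 7180 reachable" \
  || echo "port 7180 not reachable (check iptables / that CM Server is running)"
```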
03-17-2015
10:05 AM
Check the ZooKeeper "Maximum Client Connections" (maxClientCnxns) property. This often defaults to 60 and should be raised to 300. The value is a per-host limit: if processes on a single host open more connections to ZooKeeper than the limit allows, ZooKeeper begins rejecting new connections from that host. The canary test connects to ZooKeeper, creates a znode, and then deletes the znode.
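In a plain ZooKeeper deployment the same setting lives in zoo.cfg (Cloudera Manager exposes it as the "Maximum Client Connections" field); a sketch:

```
# zoo.cfg: per-host cap on concurrent client connections
# (0 disables the limit; 300 is the value suggested above)
maxClientCnxns=300
```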
02-01-2015
08:46 AM
At the present time there is no functionality in HDFS to do per-disk balancing within a DataNode. A long-standing Jira is open to add this functionality in a future release: https://issues.apache.org/jira/browse/HDFS-1312
01-12-2015
02:04 PM
2 Kudos
Creating a user in Hue creates the user only in the Hue user database, and creating a user in Cloudera Manager creates the user only in the Cloudera Manager user table. Both the user and the group need to exist in the NameNode host's operating system:

sudo useradd Peter
sudo usermod -G developer Peter

If you don't want the user to be able to log in to the NameNode:

sudo usermod -s /bin/false Peter

or

sudo usermod -s /usr/bin/nologin Peter