08-29-2016 12:03 AM · 4 Kudos
Hi Gareeb,

The error messages written to the logs are a known issue in CDH 5.8. They are harmless, although they do fill up the DataNode logs. The messages relate to a new feature, HDFS-1312 ("Re-balance disks within a Datanode"), which allows balancing of data across the disks of an individual DataNode. This feature is not yet implemented in CDH 5.8, so the only way forward at this point is to suppress the messages and keep them from filling the DataNode logs, by raising the log level of "org.apache.hadoop.hdfs.server.datanode.DiskBalancer" to FATAL.

There are two ways to do this. The first requires a restart of the DataNodes; the second does not, but the log level is not persisted after a DataNode restart.

Option 1) Modify the log4j settings.
a) Navigate to CM -> HDFS -> Configuration -> DataNode -> Advanced -> DataNode Logging Advanced Configuration Snippet (Safety Valve).
b) Add: log4j.logger.org.apache.hadoop.hdfs.server.datanode.DiskBalancer = FATAL
c) Restart the DataNodes (a rolling restart is recommended).

Option 2) Modify the setting on the command line. This will not persist across DataNode restarts, and you need to run the following commands for each DataNode (as the 'hdfs' user):
a) Check the current logging level:
# hadoop daemonlog -getlevel host-10-17-80-33.coe.cloudera.com:50075 org.apache.hadoop.hdfs.server.datanode.DiskBalancer
b) Set the logging level for the disk balancer to FATAL:
# hadoop daemonlog -setlevel host-10-17-80-33.coe.cloudera.com:50075 org.apache.hadoop.hdfs.server.datanode.DiskBalancer FATAL

I believe this information explains why the error message is displayed.

Thanks,
Sailesh
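If you have many DataNodes, Option 2 can be scripted. Here is a minimal sketch, assuming a hypothetical plain-text file datanodes.txt with one DataNode hostname per line and the default DataNode HTTP port 50075; it simply loops the same hadoop daemonlog calls shown above over each host.

```sh
#!/bin/sh
# Sketch only: apply the Option 2 commands to every DataNode.
# Assumptions (not from the post above): a file datanodes.txt listing
# DataNode hostnames, and the default DataNode HTTP port 50075.
# Run as the 'hdfs' user.

LOGGER=org.apache.hadoop.hdfs.server.datanode.DiskBalancer
PORT=50075

while read -r host; do
  echo "Current level on ${host}:"
  hadoop daemonlog -getlevel "${host}:${PORT}" "${LOGGER}"

  echo "Setting ${LOGGER} to FATAL on ${host}"
  hadoop daemonlog -setlevel "${host}:${PORT}" "${LOGGER}" FATAL
done < datanodes.txt
```

Keep in mind that, as noted above, this change is lost whenever a DataNode restarts, so the log4j safety-valve change in Option 1 is the durable fix.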