Support Questions


DATA_NODE_WEB_METRIC_COLLECTION has become bad

Explorer

Dear all,

Version: Cloudera Express 5.0.2
3 master nodes
15 workers

Problem:
"The health test result for DATA_NODE_WEB_METRIC_COLLECTION has become bad: The Cloudera Manager Agent is not able to communicate with this role's web server." 

When the above alert pops up, records like the following are seen in the datanode logs:
"INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 3121ms"

 

The alerts are thrown by a specific group of datanodes, not by all of them.
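For reference, a minimal sketch of how such pauses could be tallied from a datanode log (the log path is just an example; the message format is the one quoted above):

```python
# Minimal sketch: tally JvmPauseMonitor pauses in a DataNode log file.
# The log path below is only an example; the message format is taken from
# the line quoted above ("... pause of approximately 3121ms").
import re
import sys

PAUSE_RE = re.compile(r"JvmPauseMonitor: Detected pause .*?approximately (\d+)ms")

def summarize(log_path):
    pauses = []
    with open(log_path) as log:
        for line in log:
            match = PAUSE_RE.search(line)
            if match:
                pauses.append(int(match.group(1)))
    if pauses:
        print("pauses: %d, longest: %d ms, total: %d ms"
              % (len(pauses), max(pauses), sum(pauses)))
    else:
        print("no JvmPauseMonitor pauses found in " + log_path)

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1
              else "/var/log/hadoop-hdfs/hadoop-hdfs-datanode.log")
```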

 

What can be the problem here?

Thanks in advance

Sergey

1 ACCEPTED SOLUTION


From what I understand so far, the issue only appears on datanodes that hold a large number of blocks, far more than the healthy ones do. This can be remedied by running the HDFS Balancer.

 

In CDH 5.x, bug HDFS-6621 affects balancer performance. It is fixed in the GA releases 5.1.4 and 5.2.0 (and later versions such as 5.3.0), but not in any 5.0.x release, so please consider upgrading to one of those releases for the fix.

 

Regards,
Gautam Gopalakrishnan


5 REPLIES

It is possible that the datanode is handling more blocks, or dealing with more traffic, than its heap will allow. In that case frequent full garbage collections may be occurring, which can cause such events.

How many blocks do these datanodes have? What is the heap setting?
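One way to see block counts for all datanodes in one place is the NameNode JMX servlet; a rough sketch, assuming the default CDH 5 NameNode web port 50070, a hypothetical hostname, and a numBlocks field in the LiveNodes JSON (present in many Hadoop 2.x releases):

```python
# Rough sketch: list per-datanode block counts from the NameNode JMX servlet.
# Assumptions: NameNode web UI on port 50070 (the CDH 5 default), hypothetical
# hostname, and a "numBlocks" field in the LiveNodes JSON; adjust to your cluster.
import json

try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen         # Python 2

NAMENODE = "http://namenode.example.com:50070"  # hypothetical hostname

url = NAMENODE + "/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo"
bean = json.loads(urlopen(url).read().decode("utf-8"))["beans"][0]
live_nodes = json.loads(bean["LiveNodes"])  # LiveNodes is itself a JSON string

for name, stats in sorted(live_nodes.items(),
                          key=lambda kv: kv[1].get("numBlocks", 0),
                          reverse=True):
    print("%-45s blocks=%s" % (name, stats.get("numBlocks", "n/a")))
```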

Regards,
Gautam Gopalakrishnan

Explorer

Yes, one of my ideas was skewed data usage across the datanodes.

I explored the data usage of the nodes and noticed that the workers which trigger the alerts hold more blocks.

Below is a comparison of the sane nodes with the alerting ones.

 

sane group

Capacity   Used      Non DFS Used   Remaining   Blocks    Block pool used
14.21 TB   1.64 TB   664.86 GB      11.92 TB    127220    1.64 TB (11.55%)
14.21 TB   6.14 TB   666.38 GB      7.42 TB     639918    6.14 TB (43.23%)
14.21 TB   4.99 TB   665.79 GB      8.57 TB     465164    4.99 TB (35.11%)
14.21 TB   7.06 TB   666.4 GB       6.49 TB     795556    7.06 TB (49.71%)
14.21 TB   4.74 TB   665.74 GB      8.82 TB     445655    4.74 TB (33.35%)
14.21 TB   7.95 TB   666.13 GB      5.61 TB     907730    7.95 TB (55.96%)
14.21 TB   6.13 TB   666.08 GB      7.43 TB     640631    6.13 TB (43.12%)

 

group with issues

Capacity   Used      Non DFS Used   Remaining   Blocks     Block pool used
10.65 TB   8.96 TB   500.07 GB      1.2 TB      1175053    8.96 TB (84.13%)
10.65 TB   8.57 TB   499.76 GB      1.59 TB     1136687    8.57 TB (80.51%)
14.21 TB   8.94 TB   666.97 GB      4.62 TB     1209608    8.94 TB (62.89%)
10.65 TB   8.65 TB   500.16 GB      1.5 TB      1133144    8.65 TB (81.28%)
14.21 TB   8.98 TB   665.07 GB      4.58 TB     1225707    8.98 TB (63.19%)
10.65 TB   8.62 TB   499.82 GB      1.54 TB     1168257    8.62 TB (80.98%)
10.65 TB   8.94 TB   499.75 GB      1.22 TB     1172198    8.94 TB (83.98%)

Notably, the ill nodes have far more blocks in the block pool.

 

 

Heap size for the DataNode Default Group is 1 GB.
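For a rough back-of-the-envelope comparison of the figures above: the group with issues averages roughly 1.17 million blocks per node versus roughly 575 thousand in the sane group, i.e. about twice as many blocks tracked by the same 1 GB DataNode heap. A small sketch of that arithmetic:

```python
# Back-of-the-envelope sketch using the block counts from the tables above:
# compare the average blocks per node in each group against the 1 GB heap.
sane = [127220, 639918, 465164, 795556, 445655, 907730, 640631]
issues = [1175053, 1136687, 1209608, 1133144, 1225707, 1168257, 1172198]

avg_sane = sum(sane) / float(len(sane))
avg_issues = sum(issues) / float(len(issues))

print("avg blocks per node, sane group:        %.0f" % avg_sane)
print("avg blocks per node, group with issues: %.0f" % avg_issues)
print("the alerting nodes track %.1fx as many blocks in the same 1 GB heap"
      % (avg_issues / avg_sane))
```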

It might be best to run the HDFS Balancer on a regular basis to remedy this. If you're running CDH 5.0.x or CDH 5.1.[0-3], then consider upgrading to CDH 5.1.4 or CDH 5.2.0 for the fix to HDFS-6621.
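As an illustration, the per-node DFS Used% figures from hdfs dfsadmin -report give a quick view of how far the nodes have drifted apart before kicking off the balancer; a sketch, assuming the usual Hadoop 2.x report format:

```python
# Sketch: check how far DFS Used% has drifted across datanodes, using the
# output of "hdfs dfsadmin -report", before running "hdfs balancer".
# The "DFS Used%: 43.23%" line format is assumed from Hadoop 2.x reports.
import re
import subprocess

report = subprocess.check_output(["hdfs", "dfsadmin", "-report"]).decode("utf-8")
used_pct = [float(p) for p in re.findall(r"DFS Used%:\s*([\d.]+)%", report)]

if len(used_pct) > 1:
    # The first match is the cluster summary; the rest are per-datanode figures.
    nodes = used_pct[1:]
    print("datanode DFS used ranges from %.2f%% to %.2f%%" % (min(nodes), max(nodes)))
    if max(nodes) - min(nodes) > 10.0:
        print("spread exceeds 10 percentage points; consider: hdfs balancer -threshold 10")
```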
Regards,
Gautam Gopalakrishnan

Explorer

Hi Gautam,

Yes, we run the balancer on a regular basis, but it seems we are hitting this bug. We have plans to upgrade the CM stack, but is the current issue related to the balancer bug?

Is there some relation between the skewed block distribution and the web metric collection alerts?

 

Thanks

Sergey


From what I understand so far, the issue only appears on datanodes that hold a large number of blocks, far more than the healthy ones do. This can be remedied by running the HDFS Balancer.

 

In CDH 5.x, bug HDFS-6621 affects balancer performance. It is fixed in the GA releases 5.1.4 and 5.2.0 (and later versions such as 5.3.0), but not in any 5.0.x release, so please consider upgrading to one of those releases for the fix.
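As a quick check before the upgrade, the first line of hadoop version shows which CDH release is running; a small sketch that compares it against the fixed releases named above (the exact version-string format, e.g. "Hadoop 2.3.0-cdh5.0.2", is an assumption):

```python
# Sketch: check whether the running CDH release already carries the
# HDFS-6621 balancer fix (5.1.4 / 5.2.0 and later, as noted above).
# Assumes "hadoop version" prints a first line like "Hadoop 2.3.0-cdh5.0.2".
import re
import subprocess

first_line = subprocess.check_output(["hadoop", "version"]).decode("utf-8").splitlines()[0]
match = re.search(r"cdh(\d+)\.(\d+)\.(\d+)", first_line)

if match:
    cdh = tuple(int(part) for part in match.groups())
    fixed = cdh >= (5, 2, 0) or (cdh[:2] == (5, 1) and cdh[2] >= 4)
    status = "already has the HDFS-6621 fix" if fixed else "needs an upgrade for HDFS-6621"
    print("CDH %d.%d.%d %s" % (cdh + (status,)))
else:
    print("could not parse a CDH version from: " + first_line)
```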

 

Regards,
Gautam Gopalakrishnan