HBase performance issue


Hi All,

 

I have set up a Hadoop cluster (CDH 5.4.4) with HBase and Phoenix. I see these warnings in my regionserver logs:

 

2016-06-27 10:28:51,987 INFO org.apache.hadoop.hbase.regionserver.wal.FSHLog: Slow sync cost: 477 ms, current pipeline: [DatanodeInfoWithStorage[XXX.xx.xx.xxx:50010,DS-bf645817-6164-4723-952a-161a65da092d,DISK], DatanodeInfoWithStorage[XXX.XX.xx.xxx:50010,DS-dd3f32e1-9b62-4f95-b1e2-5fd1a1c2a6e2,DISK]]

I have analyzed the logs, and the slow sync cost goes up to 2 seconds.
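
In case it helps, this is roughly how I summarized the "Slow sync cost" values out of the regionserver log to get that number. It is just a quick sketch of my own, not official tooling, and the log path below is only an example from my node, so adjust it to your setup:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Quick scan of a regionserver log to quantify the FSHLog "Slow sync cost" warnings.
// The default path below is just an example; pass your own log file as the first argument.
public class SlowSyncSummary {
    private static final Pattern SLOW_SYNC = Pattern.compile("Slow sync cost: (\\d+) ms");

    public static void main(String[] args) throws IOException {
        String logFile = args.length > 0 ? args[0]
            : "/var/log/hbase/regionserver.log"; // example path, not a real default
        List<Long> costs = new ArrayList<>();
        for (String line : Files.readAllLines(Paths.get(logFile))) {
            Matcher m = SLOW_SYNC.matcher(line);
            if (m.find()) {
                costs.add(Long.parseLong(m.group(1)));
            }
        }
        if (costs.isEmpty()) {
            System.out.println("No slow sync warnings found");
            return;
        }
        Collections.sort(costs);
        System.out.println("slow syncs:   " + costs.size());
        System.out.println("max cost ms:  " + costs.get(costs.size() - 1));
        System.out.println("p95 cost ms:  " + costs.get((int) (costs.size() * 0.95)));
    }
}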

I also see these warnings:

2016-06-27 10:28:51,987 WARN org.apache.hadoop.hbase.ipc.RpcServer: (responseTooSlow): {"processingtimems":540,"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","client":"XXX.XXX.XXX.XXX:49410","starttimems":1467003531447,"queuetimems":0,"class":"HRegionServer","responsesize":14,"method":"Multi"}  

This call is coming from the client for a PUT request.
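
For context, our writes reach HBase through Phoenix upserts, which at the HBase level end up as batched Puts, roughly like the sketch below; that is the kind of client traffic that shows up as "Multi" on the regionserver. The table name, column family and qualifier here are placeholders for illustration, not my actual schema:

import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;

// Illustration of the client write path that appears as "Multi" calls on the
// regionserver: Puts buffered on the client and flushed as batched RPCs.
public class BatchedPutExample {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
        try (Connection connection = ConnectionFactory.createConnection(conf);
             BufferedMutator mutator =
                 connection.getBufferedMutator(TableName.valueOf("example_table"))) {
            for (int i = 0; i < 1000; i++) {
                Put put = new Put(("row-" + i).getBytes(StandardCharsets.UTF_8));
                put.addColumn("cf".getBytes(StandardCharsets.UTF_8),
                              "q".getBytes(StandardCharsets.UTF_8),
                              ("value-" + i).getBytes(StandardCharsets.UTF_8));
                mutator.mutate(put); // buffered locally, sent to the regionserver in batches
            }
            mutator.flush(); // pushes out the remaining buffered Puts
        }
    }
}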

 

I see these warnings in my datanode logs:

2016-06-27 10:47:51,162 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow flushOrSync took 380ms (threshold=300ms), isSync:false, flushTotalNanos=379910826ns

2016-06-27 10:47:51,162 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:400ms (threshold=300ms)

2016-06-27 10:47:51,162 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:397ms (threshold=300ms)
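
If I understand the DataNode side correctly, the 300 ms threshold in these messages is controlled by dfs.datanode.slow.io.warning.threshold.ms; please treat that as my assumption rather than a confirmed fact. This small check just prints what the local configuration resolves it to:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

// Prints the datanode slow-I/O warning threshold as resolved from hdfs-site.xml.
// That this property is the source of the "threshold=300ms" in the warnings is
// my assumption, based on reading the warning text.
public class SlowIoThresholdCheck {
    public static void main(String[] args) {
        Configuration conf = new HdfsConfiguration(); // loads hdfs-site.xml from the classpath
        long thresholdMs = conf.getLong("dfs.datanode.slow.io.warning.threshold.ms", 300L);
        System.out.println("dfs.datanode.slow.io.warning.threshold.ms = " + thresholdMs + " ms");
    }
}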


  • My cluster configuration is: 4 datanodes, 2 namenodes.
    • Datanode configuration: 64 GB RAM, non-RAID disks (4.5 TB each), 32 cores
    • Namenode configuration: 32 GB RAM, 32 cores
    • Regionservers co-hosted with the datanodes
    • HBase Master co-hosted with the namenode
    • HBase regionserver heap (Xmx): 24 GB
  • I would appreciate any pointers for resolving this issue.