Member since: 02-24-2015
Posts: 7
Kudos Received: 0
Solutions: 0
04-14-2017
09:58 PM
sudo -u accumulo ACCUMULO_CONF_DIR=/etc/accumulo/conf/server accumulo init --instance-name hdp-accumulo-instance --clear-instance-name

The above command takes a very long time and does not complete. I could not find any log in /var/log/accumulo/master_stlrx2540m1-108.rtpppe.netapp.com.log. Is there any other place to look for logs? Can you please help me re-initialize Accumulo? Regards, Karthikeyan
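A minimal checklist sketch of what I am verifying while it hangs: init has to reach both HDFS and ZooKeeper, so a hang usually points at one of them. The HDFS path and the ZooKeeper hostname below are assumptions for an HDP-style layout, not values confirmed from this cluster:

    # check that HDFS is up and the Accumulo data directory (commonly /apps/accumulo on HDP) exists
    sudo -u hdfs hdfs dfs -ls /apps/accumulo

    # check that a ZooKeeper quorum member answers; replace zk-host with a real hostname
    echo ruok | nc zk-host 2181    # a healthy server replies "imok"

    # re-run init in the foreground; it reports progress on stdout/stderr rather than in the master log
    sudo -u accumulo ACCUMULO_CONF_DIR=/etc/accumulo/conf/server \
      accumulo init --instance-name hdp-accumulo-instance --clear-instance-name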
07-27-2015
08:22 AM
Hello Harsh, I tried with ext4 as well, and the "Slow BlockReceiver write data to disk" warning is still logged in the datanode log. Do you have any best-practice guidelines for performance tuning at the OS, network, HDFS, server hardware, and filesystem level? I need performance-tuning guidance for the HDFS block write path from the OS down to the disk, which would help pinpoint the problematic area. Can you throw some light on this? Regards,
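To narrow down where in the OS-to-disk path the time goes, here is a minimal isolation sketch independent of HDFS; the device name and mount point are assumptions based on the setup described earlier in this thread:

    # raw sequential write throughput on one data disk, bypassing the page cache
    dd if=/dev/zero of=/disk1/ddtest bs=1M count=4096 oflag=direct

    # per-disk utilization, queue size and await while a TestDFSIO write is in flight
    iostat -dxm 2 /dev/sdb

    # confirm the read-ahead and I/O scheduler currently in effect for the device
    /sbin/blockdev --getra /dev/sdb
    cat /sys/block/sdb/queue/scheduler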
07-17-2015
08:54 AM
Hello Harsh, Thanks for your response. Is there any specific reason to recommend ext4? I would like to verify whether ext4 actually provides better performance than XFS before switching. Can you please provide your input?
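For an apples-to-apples comparison I would mount the ext4 data disk with options equivalent to the XFS ones. A sketch of such an /etc/fstab entry; the label, mount point, and option set are assumptions mirroring the earlier XFS noatime settings, not a recommendation:

    LABEL=DISK1  /disk1  ext4  defaults,noatime,nodiratime  0 0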
07-17-2015
07:10 AM
The TestDFSIO write operation takes a long time on an 8-datanode cluster; each datanode has 48 GB RAM, 16 CPU cores, and a 10GigE network. The 10GigE port utilization is 1.5 to 2 gigabits/sec on each datanode during the write operation.

XFS filesystem configuration:

parted -s /dev/sdb mklabel gpt mkpart /dev/sdb1 xfs 6144s 10.0TB
/sbin/mkfs.xfs -f -L DISK1 -l size=128m,sunit=256,lazy-count=1 -d su=512k,sw=6 -r extsize=256k /dev/sdb1
mkdir /disk1
mount /disk1
/sbin/blockdev --setra 1024 /dev/sdb
df -h /disk1

/etc/fstab entry:

LABEL=DISK1 /disk1 xfs allocsize=128m,noatime,nobarrier,nodiratime 0 0

/var/log/hadoop-hdfs/hadoop-cmf-hdfs-DATANODE-hadoop1.com.log.out shows:

For dfs.blocksize=128m:
2015-07-14 17:47:16,391 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:614ms (threshold=300ms)
2015-07-14 17:47:16,400 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:623ms (threshold=300ms)
2015-07-14 17:47:16,401 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:548ms (threshold=300ms)
2015-07-14 17:47:16,420 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:567ms (threshold=300ms)

For dfs.blocksize=512m:
2015-07-17 09:46:28,999 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write packet to mirror took 408ms (threshold=300ms)
2015-07-17 09:46:28,999 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write packet to mirror took 448ms (threshold=300ms)
2015-07-17 09:46:29,009 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write packet to mirror took 409ms (threshold=300ms)
2015-07-17 09:46:29,009 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write packet to mirror took 451ms (threshold=300ms)

Can you please throw some light on how to fix this issue? Regards, Karthik
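For reference, the benchmark is run along these lines; the jar path is an assumption for a CDH parcel install, and the file count and size are illustrative values, not the exact ones used above:

    # TestDFSIO write run; -fileSize is in MB with this Hadoop 2.x-era syntax
    hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar \
      TestDFSIO -write -nrFiles 16 -fileSize 1000

    # the throughput/latency summary lands in TestDFSIO_results.log in the local working directory
    cat TestDFSIO_results.log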
Labels:
- Apache Hadoop
- HDFS