
The Hadoop release itself ships with typical testing tools, and there are many other popular benchmarks for Apache Hadoop. On HDP, however, some of these testing tools need to be configured before they can be used, and others need code revisions before they work, for example to support more metrics or to run in a Kerberized environment. This article introduces the configuration and modifications needed to make these testing tools work on HDP.

Let's start by going through the typical service components of HDP. Throughout this article I am using an HDP cluster Kerberized with FreeIPA.

HBase - YCSB

We begin with HBase because its benchmarking tools require the most code revision. Besides the built-in evaluation tool org.apache.hadoop.hbase.PerformanceEvaluation, which measures throughput and read/write performance, the more popular tool is YCSB, which you can download with git:

git clone git://

Before compiling, we need the following code modifications so it can run in a Kerberized environment. Here I simply hard-code the configuration needed to obtain credentials. You could instead put all of this information into parameters following the 'java' or './ycsb' command that actually executes the benchmark, but to save time on error-prone command typing, hard-coding lets us focus on tuning the benchmark parameters instead of the credential configuration.

//modify source code
    config.set("hbase.security.authentication", "kerberos");
    config.set("zookeeper.znode.parent", "/hbase-secure");
    getProperties().setProperty("principal", "hbase/");  // truncated in the original; full service principal goes here
    getProperties().setProperty("keytab", "/etc/security/keytabs/hbase.service.keytab");
    if ((getProperties().getProperty("principal") != null)
        && (getProperties().getProperty("keytab") != null)
        && "kerberos".equalsIgnoreCase(config.get("hbase.security.authentication"))) {
      try {
        UserGroupInformation userGroupInformation = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
            getProperties().getProperty("principal"), getProperties().getProperty("keytab"));
        UserGroupInformation.setLoginUser(userGroupInformation);
      } catch (IOException e) {
        System.err.println("Keytab file is not readable or not found");
        throw new DBException(e);
      }
    }

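If you prefer not to hard-code, the same information can in principle be passed as YCSB properties on the command line. A sketch follows; the property names must match whatever your modified client actually reads, and the principal and keytab values here are examples from this cluster:

```shell
# Hypothetical alternative to hard-coding: pass kerberos settings as -p
# properties. Property names are assumptions; align them with your client code.
PROPS="-p principal=hbase/cheny0@FIELD.HORTONWORKS.COM -p keytab=/etc/security/keytabs/hbase.service.keytab"
echo "$PROPS"
# ./bin/ycsb load hbase12 -P workloads/workloada -p columnfamily=f1 -p table=ycsb $PROPS
```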
Then remember to align the dependency versions with the versions actually used in HDP. Note that pom.xml in the sub-project 'hbase12' references the variable 'hbase12.version':

<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-shaded-client</artifactId>
  <version>${hbase12.version}</version>
</dependency>

Then check the root pom and modify the version to align with the HBase shipped in your HDP.
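For instance, the property in the root pom might look like this (the version number below is only a placeholder; substitute the HBase version your HDP cluster actually runs):

```xml
<!-- root pom.xml: placeholder version, replace with your HDP's HBase version -->
<properties>
  <hbase12.version>1.2.0</hbase12.version>
</properties>
```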


Finally, compile to obtain the executable jar file, and then launch it.

sh-3.2# cd /Users/chen.yang/workspace/YCSB 
sh-3.2# mvn -DskipTests package
sh-3.2# ssh cheny0
[root@cheny0 ~]# cd /tmp
[root@cheny0 tmp]# tar zxvf ycsb-hbase12-binding-0.13.0-SNAPSHOT.tar.gz 
[root@cheny0 tmp]# mv ycsb-hbase12-binding-0.13.0-SNAPSHOT /ycsb

//build conf dir
[root@cheny0 tmp]# mkdir -p /ycsb/hbase12-binding/conf
[root@cheny0 tmp]# cp /etc/hbase/conf/* /ycsb/hbase12-binding/conf/

//execute load
[root@cheny0 tmp]# cd /ycsb/bin
[root@cheny0 bin]# ./ycsb load hbase12 -P /ycsb/workloads/workloada -p columnfamily=f1 -p table=ycsb

//internally this translates into the command
[root@cheny0 bin]# java -cp "/etc/hbase/conf:/ycsb/lib/*" com.yahoo.ycsb.Client -db com.yahoo.ycsb.db.hbase12.HBaseClient12 -P /ycsb/workloads/workloada -p columnfamily=f1 -p table=ycsb -load
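After the load phase populates the table, the run phase replays the workload's operation mix. A sketch using the same layout built above ('operationcount' is a standard YCSB property bounding the number of operations; the invocation is skipped if ycsb is not present):

```shell
# Run phase sketch; paths assume the /ycsb layout created earlier.
YCSB=/ycsb/bin/ycsb
if [ -x "$YCSB" ]; then
  "$YCSB" run hbase12 -P /ycsb/workloads/workloada \
    -p columnfamily=f1 -p table=ycsb -p operationcount=100000
else
  echo "ycsb not found, skipping"
fi
```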

HBase - RegionsStats.rb

This is not a powerful testing tool, but it is useful for laying out the data distribution across regions, giving you insight into data skew, which heavily affects the performance of your HBase applications and services. It is a Ruby script.

Unfortunately, it fails on the very first run with the following error:

[root@cheny0 tmp]# hbase org.jruby.Main RegionsStats.rb
NameError: cannot load Java class org.apache.hadoop.hbase.HServerLoad
  get_proxy_or_package_under_package at org/jruby/javasupport/
  at file:/usr/hdp/!/builtin/javasupport/java.rb:51
  (root) at RegionsStats.rb:15

The failure occurs because the script tries to load a deprecated class. After deleting the 15th line of the script, it works:

//use ruby script to check balance
[root@cheny0 keytabs]# kinit -kt hbase.headless.keytab hbase-cheny@FIELD.HORTONWORKS.COM
[root@cheny0 keytabs]# cd /tmp
[root@cheny0 tmp]# hbase org.jruby.Main RegionsStats.rb
2017-08-24 00:49:18,816 INFO  [main] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
Hostname | regions count | regions size
...      | 3 | 691
...      | 3 | 323
...      | 3 | 325
0 | hbase:acl,,1501955861021.8b9119de437d98b0e5ffdf2475d1493b.
0 | hbase:meta,,1
0 | hbase:namespace,,1501952068264.30b243b85880dd40396d7940d6813605.
161 | TestTable,000000000000000000005244,1503383284564.22f23efda0244448340e62d2d457ae3b.
162 | TestTable,00000000000000000000654925,1503383284564.d71893801deac51ee626a5900320f2de.
162 | TestTable,00000000000000000000917303,1503383284403.912d9ab2d345097f51cf2c1a6e29b7b9.
163 | TestTable,0000000000000000000078582,1503383284403.0d5b67dfc510b9dd4e940116cc5509fa.
345 | TestTable,,1503159051542.63000b3e289f8d32ee37ce6632e3e0ef.
346 | TestTable,00000000000000000000261843,1503159051542.f34eea103da422ab90607a06eb4d350f.

Kafka - kafka-producer-perf-test.sh

Besides tuning the disk I/O performance of the Kafka brokers, another pretty big influence on performance comes from the producer and consumer implementations. Here we only talk about the producer: for each partition it maintains a queue that accumulates messages, so that it can save RPCs to the broker by sending in batches. The thresholds that control when messages are sent out therefore strongly affect I/O performance. I recommend using the patch; download the project and compile it to get 'kafka-tools-1.0.0-SNAPSHOT.jar', then replace its counterpart in the folder '/usr/hdp/current/kafka-broker/libs'. Before executing, make the following modification in the launcher script:


exec $(dirname $0)/


exec $(dirname $0)/

Execute the benchmark:

//for security testing
[kafka@cheny0 bin]$ kinit -kt /etc/security/keytabs/kafka.service.keytab kafka/
[root@cheny0 bin]$ ./kafka-producer-perf-test.sh --num-records 1000000 --record-size 1000 --topic test_topic --throughput 100000 --num-threads 2 --value-bound 50000 --print-metrics --producer-props compression.type=gzip security.protocol=SASL_PLAINTEXT
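As a sanity check on the flags above: --throughput caps the send rate in records per second, so with --record-size 1000 the target payload rate works out as follows (simple arithmetic, not an official Kafka formula):

```shell
# Target byte rate implied by --throughput 100000 and --record-size 1000
RECORDS_PER_SEC=100000
RECORD_SIZE=1000       # bytes per record
BYTES_PER_SEC=$((RECORDS_PER_SEC * RECORD_SIZE))
MB_PER_SEC=$((BYTES_PER_SEC / 1024 / 1024))
echo "target ~ ${MB_PER_SEC} MB/sec"
```

If the benchmark reports a sustained rate far below this target, the bottleneck is on the broker or network side rather than in the workload definition.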

It is a really good tool for emulating load and gathering sufficient metrics to study broker health, and it comes with a set of equations for judging whether things are healthy. For details, refer to the accompanying YouTube video and slides. And believe me, the beauty is that a small adjustment can make a big difference; try it and you will come to appreciate it.

HDFS and MapReduce

Here I just want to refer you to the existing documentation, which has sufficient explanation and tutorials. But one more thing to watch out for: the default testing scale is 1 terabyte, so if your cluster nodes have limited disk capacity, you should strongly consider decreasing that value first, and then run the benchmark. Otherwise, it will ruin your good mood.
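For example, TeraGen sizes its output in 100-byte rows, so for a small cluster you might generate 10 GB instead of 1 TB. A sketch follows; the examples jar path assumes the usual HDP layout and the output directory is arbitrary, so adjust both for your cluster (the hadoop invocation is skipped if the client is not installed):

```shell
# TeraGen writes 100-byte rows; 10 GB => 100,000,000 rows (vs 10^10 rows for 1 TB)
TARGET_GB=10
ROWS=$((TARGET_GB * 1000 * 1000 * 1000 / 100))
echo "rows: $ROWS"
# Assumed HDP path; adjust to your installation.
EXAMPLES_JAR=/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar
if command -v hadoop >/dev/null 2>&1; then
  hadoop jar "$EXAMPLES_JAR" teragen "$ROWS" /benchmarks/teragen-${TARGET_GB}g
fi
```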

Okay guys, I know this article is not new to you; it is just a collection from my memory, shared so you can better carry out these benchmarks. Confidence in your tuned cluster comes only from benchmarking its health and learning to parse the results. The above is what I have done across several projects. If I learn more later, I will add to it. Hope you do well too!