The Hadoop release itself ships with the typical testing tools, and there are many other popular benchmarks for testing Apache Hadoop. On HDP, some of these tools need to be configured before they can be used, and others need code changes before they work, for example to support more metrics or to run in a Kerberized environment. This article introduces the configuration and modifications needed to get these testing tools working on HDP.

Let's go through the typical service components of HDP. I'm using HDP 2.6.1.0, Kerberized with FreeIPA.

HBase - YCSB

We begin with HBase because its benchmarking tools need the most code changes. Besides HBase's built-in evaluation tool org.apache.hadoop.hbase.PerformanceEvaluation for measuring throughput and read/write performance, the more popular tool is YCSB, which you can download from git://github.com/brianfrankcooper/YCSB.git

git clone git://github.com/brianfrankcooper/YCSB.git

Before compiling, we need the following code modification so that it runs in a Kerberized environment. Here I simply hard-code the configuration needed to obtain credentials. You could instead pass all of this information as parameters to the 'java' or './ycsb' command that actually executes the benchmark (an example follows the code below), but hard-coding avoids error-prone command typing and lets us focus on tuning the benchmark parameters rather than the credential configuration.

//modify source code com.yahoo.ycsb.db.HBaseClient10.java (the hbase12 binding reuses this class)
    //add: security and cluster connection settings (use your own quorum and znode parent)
    config.set("hadoop.security.authentication", "Kerberos");
    config.set("hbase.zookeeper.quorum",
        "cheny0.field.hortonworks.com,cheny2.field.hortonworks.com,cheny1.field.hortonworks.com");
    config.set("zookeeper.znode.parent", "/hbase-secure");
    //add: hard-coded service principal and keytab (adjust for your realm)
    getProperties().setProperty("principal", "hbase/cheny0.field.hortonworks.com@FIELD.HORTONWORKS.COM");
    getProperties().setProperty("keytab", "/etc/security/keytabs/hbase.service.keytab");
    if ((getProperties().getProperty("principal")!=null)
        && (getProperties().getProperty("keytab")!=null) &&
        "kerberos".equalsIgnoreCase(config.get("hadoop.security.authentication"))) {
      try {
        UserGroupInformation.setConfiguration(config);
        UserGroupInformation userGroupInformation = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
             getProperties().getProperty("principal"),
            getProperties().getProperty("keytab"));
        UserGroupInformation.setLoginUser(userGroupInformation);
      } catch (IOException e) {
        System.err.println("Keytab file is not readable or not found");
        throw new DBException(e);
      }
    }
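
As an alternative to hard-coding, note that the check above reads 'principal' and 'keytab' from the workload properties, so you should be able to supply them on the command line instead. A sketch, reusing the principal and keytab paths from above:

./bin/ycsb load hbase12 -P workloads/workloada -p columnfamily=f1 -p table=ycsb \
    -p principal=hbase/cheny0.field.hortonworks.com@FIELD.HORTONWORKS.COM \
    -p keytab=/etc/security/keytabs/hbase.service.keytab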

Then remember to align the dependency versions with the versions actually used in HDP. Note that pom.xml in the sub-project 'hbase12' references the variable 'hbase12.version':

<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-shaded-client</artifactId>
  <version>${hbase12.version}</version>
</dependency>

Then check the root pom and modify the version to match the HBase shipped with HDP 2.6.1.0:

<hbase12.version>1.1.2</hbase12.version>
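
If you are unsure which HBase version your cluster actually ships, you can check on a cluster node first; 'hbase version' is standard, and the jar listing below assumes the usual HDP directory layout:

[root@cheny0 ~]# hbase version
[root@cheny0 ~]# ls /usr/hdp/current/hbase-client/lib/hbase-client-*.jar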

Finally, compile to obtain the executable package, copy it to a cluster node, and launch it:

sh-3.2# cd /Users/chen.yang/workspace/YCSB 
sh-3.2# mvn -DskipTests package
//copy the binding tarball (produced under hbase12/target) to a cluster node
sh-3.2# scp hbase12/target/ycsb-hbase12-binding-0.13.0-SNAPSHOT.tar.gz cheny0:/tmp/
sh-3.2# ssh cheny0
[root@cheny0 ~]# cd /tmp
[root@cheny0 tmp]# tar zxvf ycsb-hbase12-binding-0.13.0-SNAPSHOT.tar.gz 
[root@cheny0 tmp]# mv ycsb-hbase12-binding-0.13.0-SNAPSHOT /ycsb

//build conf dir
[root@cheny0 tmp]# mkdir -p /ycsb/hbase12-binding/conf
[root@cheny0 tmp]# cp /etc/hbase/conf/* /ycsb/hbase12-binding/conf/

//execute load
[root@cheny0 tmp]# cd /ycsb/bin
[root@cheny0 bin]# ./ycsb load hbase12 -P /ycsb/workloads/workloada -p columnfamily=f1 -p table=ycsb

//internally this translates into the command
[root@cheny0 bin]# java -Djavax.security.auth.useSubjectCredsOnly=false -cp "/etc/hbase/conf:/ycsb/lib/*" com.yahoo.ycsb.Client -db com.yahoo.ycsb.db.hbase12.HBaseClient12 -P /ycsb/workloads/workloada -p columnfamily=f1 -p table=ycsb -load
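
After the load phase, execute the transaction phase with 'run' in the same way; the operation count and thread count below are illustrative values, not from my original runs:

[root@cheny0 bin]# ./ycsb run hbase12 -P /ycsb/workloads/workloada -p columnfamily=f1 -p table=ycsb -p operationcount=1000000 -threads 10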

HBase - RegionsStats.rb

This is not a powerful testing tool, but it is useful for laying out the data distribution across regions, giving you insight into data skew, which heavily affects the performance of your HBase applications and services. It is a Ruby script that can be downloaded from https://gist.github.com/nihed/f9ade8e6e8da7134aba4

The pity is that it fails on the very first run, with the following error:

[root@cheny0 tmp]# hbase org.jruby.Main RegionsStats.rb
NameError: cannot load Java class org.apache.hadoop.hbase.HServerLoad
  get_proxy_or_package_under_package at org/jruby/javasupport/JavaUtilities.java:54
  method_missing at file:/usr/hdp/2.6.1.0-129/hbase/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/java.rb:51
  (root) at RegionsStats.rb:15

The failure occurs because the script tries to load a class that has since been removed from HBase. After deleting line 15, it works.
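
A one-liner to drop that line in place, assuming the offending import still sits on line 15 as in the gist:

[root@cheny0 tmp]# sed -i '15d' RegionsStats.rb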

//use ruby script to check balance
[root@cheny0 keytabs]# kinit -kt hbase.headless.keytab hbase-cheny@FIELD.HORTONWORKS.COM
[root@cheny0 keytabs]# cd /tmp
[root@cheny0 tmp]# hbase org.jruby.Main RegionsStats.rb
2017-08-24 00:49:18,816 INFO  [main] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
*******************************************************************
Hostname | regions count | regions size
cheny1.field.hortonworks.com | 3 | 691
cheny2.field.hortonworks.com | 3 | 323
cheny0.field.hortonworks.com | 3 | 325
*******************************************************************
0 | hbase:acl,,1501955861021.8b9119de437d98b0e5ffdf2475d1493b.
0 | hbase:meta,,1
0 | hbase:namespace,,1501952068264.30b243b85880dd40396d7940d6813605.
161 | TestTable,000000000000000000005244,1503383284564.22f23efda0244448340e62d2d457ae3b.
162 | TestTable,00000000000000000000654925,1503383284564.d71893801deac51ee626a5900320f2de.
162 | TestTable,00000000000000000000917303,1503383284403.912d9ab2d345097f51cf2c1a6e29b7b9.
163 | TestTable,0000000000000000000078582,1503383284403.0d5b67dfc510b9dd4e940116cc5509fa.
345 | TestTable,,1503159051542.63000b3e289f8d32ee37ce6632e3e0ef.
346 | TestTable,00000000000000000000261843,1503159051542.f34eea103da422ab90607a06eb4d350f.

Kafka - kafka-producer-perf-test.sh

Besides the disk I/O performance of the Kafka brokers, another big influence on performance comes from the producer and consumer implementations. Here we only discuss the producer. For each partition, the producer maintains a queue in which it accumulates messages so that they can be sent to the broker in batches, saving RPCs; the thresholds that trigger sending therefore strongly affect I/O performance. I recommend using the patch from https://issues.apache.org/jira/browse/KAFKA-3554 Download the project and compile it to get 'kafka-tools-1.0.0-SNAPSHOT.jar', then replace its counterpart in the folder '/usr/hdp/current/kafka-broker/libs'. Before executing, make the following modification in kafka-producer-perf-test.sh

replace

exec $(dirname $0)/kafka-run-class.sh kafka.tools.ProducerPerformance

by

exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.ProducerPerformance
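
The batching behavior described above is governed mainly by the standard producer configs batch.size and linger.ms, together with compression.type and max.in.flight.requests.per.connection. The batch.size value below is only an illustrative starting point; the rest match the benchmark command further down:

--producer-props bootstrap.servers=cheny0.field.hortonworks.com:6667 batch.size=65536 linger.ms=5 compression.type=gzip max.in.flight.requests.per.connection=1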

execute the benchmark

//for security testing
[kafka@cheny0 bin]$ kinit -kt /etc/security/keytabs/kafka.service.keytab kafka/cheny0.field.hortonworks.com@FIELD.HORTONWORKS.COM
[kafka@cheny0 bin]$ ./kafka-producer-perf-test.sh --num-records 1000000 --record-size 1000 --topic test_topic --throughput 100000 --num-threads 2 --value-bound 50000 --print-metrics --producer-props bootstrap.servers=cheny0.field.hortonworks.com:6667 compression.type=gzip max.in.flight.requests.per.connection=1 linger.ms=5 security.protocol=SASL_PLAINTEXT
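
Note that with security.protocol=SASL_PLAINTEXT the client JVM also needs a JAAS configuration. If it is not picked up automatically, you can pass it through KAFKA_OPTS; the path below is the usual HDP client JAAS location, so verify yours:

[kafka@cheny0 bin]$ export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf"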

It is a really good tool for emulating load and collecting enough metrics to study producer health, and it comes with a set of formulas for judging whether things are healthy or not. For details, refer to the YouTube video https://www.youtube.com/watch?v=oQe7PpDDdzA and the slides at https://www.slideshare.net/JiangjieQin/producer-performance-tuning-for-apache-kafka-63147600?qid=ebe... And believe me, the beauty is that small adjustments can make a big difference; try it and you will come to appreciate it.

HDFS and MapReduce

Here I just want to refer you to http://www.michael-noll.com/blog/2011/04/09/benchmarking-and-stress-testing-an-hadoop-cluster-with-t... which has ample explanation and tutorials. One warning, though: the default testing scale is 1 terabyte, so if your cluster nodes have limited disk capacity, you should strongly consider decreasing that value before running the benchmark (see the example below). Otherwise, it will ruin your good mood.
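
For example, TeraGen writes 100-byte rows, so 10,000,000 rows yields roughly 1 GB instead of the canonical 1 TB; the jar path below is the standard HDP 2.6 location, and the HDFS output paths are arbitrary:

//generate ~1 GB (10,000,000 rows x 100 bytes) and sort it
[root@cheny0 ~]# hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar teragen 10000000 /tmp/teragen-1g
[root@cheny0 ~]# hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar terasort /tmp/teragen-1g /tmp/terasort-1g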

Okay guys, I know this article is nothing new to you; I have simply collected these notes from memory to share with you so you can carry out these benchmarks more smoothly. Confidence in a tuned cluster comes only from checking its health through benchmarking and learning to parse the results. The above is what I have done in several projects. If I learn more later, I will add to this article. Hope it goes well for you too!
