
How to run nnbench on HDP 2.3.2?

In HDP 2.3.0, this command could be used to run nnbench:

$ yarn jar /usr/hdp/2.3.0.0-2557/hadoop-hdfs/hadoop-hdfs-tests.jar nnbench -operation create_write

In HDP 2.3.2, it seems nnbench is no longer part of hadoop-hdfs-tests.jar:

# yarn jar /usr/hdp/2.3.2.0-2950/hadoop-hdfs/hadoop-hdfs-tests.jar nnbench -operation create_write
Exception in thread "main" java.lang.ClassNotFoundException: nnbench
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:278)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:214)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)


# jar -tvf /usr/hdp/2.3.2.0-2950/hadoop-hdfs/hadoop-hdfs-tests.jar | grep bench
#

I tried searching /usr for jars containing the nnbench class but did not get any results:

find /usr -iname '*.jar' | xargs -i bash -c "jar -tvf {} | tr / . | grep nnbench && echo {}"
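
Note that the class inside the jars is named NNBench (camel case), so a case-sensitive grep for nnbench will never match it. A case-insensitive variant of the same search (a sketch; the /usr/hdp install root and jar layout are assumed) should turn it up:

# Sketch: list every jar under /usr/hdp and print the ones containing an
# entry that matches NNBench, ignoring case (nnbench vs. NNBench.class).
find /usr/hdp -iname '*.jar' 2>/dev/null | while read -r j; do
    jar -tf "$j" 2>/dev/null | grep -qi 'NNBench' && echo "$j"
done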

Has this class been renamed or deprecated?

1 ACCEPTED SOLUTION

Master Mentor

@Ali Bajwa

I see it's still part of the code (the hdfs.NNBench class in the Hadoop source); it can be run from hadoop-mapreduce-client-jobclient.jar:

[root@phdns01 hadoop]# yarn jar /usr/hdp/2.3.4.0-3276/hadoop-mapreduce/hadoop-mapreduce-client-jobclient.jar nnbench –operation create_write
NameNode Benchmark 0.4
15/12/03 19:50:23 INFO hdfs.NNBench: Test Inputs:
15/12/03 19:50:23 INFO hdfs.NNBench: Test Operation: none
15/12/03 19:50:23 INFO hdfs.NNBench: Start time: 2015-12-03 19:52:23,879
15/12/03 19:50:23 INFO hdfs.NNBench: Number of maps: 1
15/12/03 19:50:23 INFO hdfs.NNBench: Number of reduces: 1
15/12/03 19:50:23 INFO hdfs.NNBench: Block Size: 1
15/12/03 19:50:23 INFO hdfs.NNBench: Bytes to write: 0
15/12/03 19:50:23 INFO hdfs.NNBench: Bytes per checksum: 1
15/12/03 19:50:23 INFO hdfs.NNBench: Number of files: 1
15/12/03 19:50:23 INFO hdfs.NNBench: Replication factor: 1
15/12/03 19:50:23 INFO hdfs.NNBench: Base dir: /benchmarks/NNBench
15/12/03 19:50:23 INFO hdfs.NNBench: Read file after open: false
Error: Unknown operation: none
Usage: nnbench <options>
Options:
-operation <Available operations are create_write open_read rename delete. This option is mandatory>
* NOTE: The open_read, rename and delete operations assume that the files they operate on, are already available. The create_write operation must be run before running the other operations.
-maps <number of maps. default is 1. This is not mandatory>
-reduces <number of reduces. default is 1. This is not mandatory>
-startTime <time to start, given in seconds from the epoch. Make sure this is far enough into the future, so all maps (operations) will start at the same time. default is launch time + 2 mins. This is not mandatory>
-blockSize <Block size in bytes. default is 1. This is not mandatory>
-bytesToWrite <Bytes to write. default is 0. This is not mandatory>
-bytesPerChecksum <Bytes per checksum for the files. default is 1. This is not mandatory>
-numberOfFiles <number of files to create. default is 1. This is not mandatory>
-replicationFactorPerFile <Replication factor for the files. default is 1. This is not mandatory>
-baseDir <base DFS path. default is /becnhmarks/NNBench. This is not mandatory>
-readFileAfterOpen <true or false. if true, it reads the file and reports the average time to read. This is valid with the open_read operation. default is false. This is not mandatory>
-help: Display the help statement
[root@phdns01 hadoop]#
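
Note: the "Error: Unknown operation: none" above appears to be caused by the en dash (–) that was pasted in front of operation instead of a plain ASCII hyphen, so NNBench never sees the flag and falls back to its default operation. With a plain hyphen the same jar runs the benchmark, as the follow-up reply below shows:

yarn jar /usr/hdp/2.3.4.0-3276/hadoop-mapreduce/hadoop-mapreduce-client-jobclient.jar nnbench -operation create_write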

3 REPLIES

Master Mentor

@Ali Bajwa

[hdfs@phdns01 ~]$ hadoop jar /usr/hdp/2.3.4.0-3276/hadoop-mapreduce/hadoop-mapreduce-client-jobclient.jar nnbench -operation create_write -maps 1 -reduces 1 -blockSize 1 -bytesToWrite 0 -numberOfFiles 1000 -replicationFactorPerFile 3 -readFileAfterOpen true -baseDir /benchmarks/NNBench

WARNING: Use "yarn jar" to launch YARN applications.
NameNode Benchmark 0.4
15/12/03 19:53:11 INFO hdfs.NNBench: Test Inputs:
15/12/03 19:53:11 INFO hdfs.NNBench: Test Operation: create_write
15/12/03 19:53:11 INFO hdfs.NNBench: Start time: 2015-12-03 19:55:11,636

Thanks! I will change my script to use the /usr/hdp/current path so the jar location remains the same across releases:

yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient.jar nnbench -operation create_write
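
For example, here is a minimal sketch of such a script, combining the version-independent /usr/hdp/current path above with the options used in the earlier reply (the option values are simply the ones shown there, not tuning recommendations):

#!/usr/bin/env bash
# Sketch: run the NNBench create_write phase via the /usr/hdp/current symlink
# so the jar path stays the same across HDP releases.
set -euo pipefail

JAR=/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient.jar

yarn jar "$JAR" nnbench \
  -operation create_write \
  -maps 1 \
  -reduces 1 \
  -blockSize 1 \
  -bytesToWrite 0 \
  -numberOfFiles 1000 \
  -replicationFactorPerFile 3 \
  -readFileAfterOpen true \
  -baseDir /benchmarks/NNBench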