Member since: 04-10-2019
33 Posts
0 Kudos Received
0 Solutions
06-15-2019 08:04 AM
Hi @Jay Kumar SenSharma. I just checked my FQDNs and they are all correct: master.rh.bigdata.cluster, node2.rh.bigdata.cluster, node3.rh.bigdata.cluster, node4.rh.bigdata.cluster. I still get the error!
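For reference, here is a quick way to double-check the forward and reverse name resolution that Ambari expects on every host (a minimal sketch; the host names and the master IP are taken from this thread, and the commands assume a standard Linux name-service setup):
# run on each host; it should print that host's own FQDN
hostname -f
# forward lookups for every node should return the expected addresses
getent hosts master.rh.bigdata.cluster node2.rh.bigdata.cluster node3.rh.bigdata.cluster node4.rh.bigdata.cluster
# reverse lookup of the master's IP should come back to the same FQDN
getent hosts 172.16.138.156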
... View more
06-14-2019 08:18 AM
Hi @Jay Kumar SenSharma, this is the output of ps -ef | grep -i zookeeper on master.rh.bigdata.cluster: root@RHBigData1:~# ps -ef | grep -i zookeeper
root 11171 11148 0 09:58 pts/0 00:00:00 grep --color=auto -i zookeeper
zookeep+ 17902 1 0 Jun13 ? 00:04:15 /usr/jdk64/jdk1.8.0_112/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.log.file=zookeeper-zookeeper-server-RHBigData1.log -Dzookeeper.root.logger=INFO,ROLLINGFILE -cp /usr/hdp/current/zookeeper-server/bin/../build/classes:/usr/hdp/current/zookeeper-server/bin/../build/lib/*.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-api-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/netty-3.10.5.Final.jar:/usr/hdp/current/zookeeper-server/bin/../lib/log4j-1.2.16.jar:/usr/hdp/current/zookeeper-server/bin/../lib/jline-0.9.94.jar:/usr/hdp/current/zookeeper-server/bin/../zookeeper-3.4.6.3.1.0.0-78.jar:/usr/hdp/current/zookeeper-server/bin/../src/java/lib/*.jar:/usr/hdp/current/zookeeper-server/conf::/usr/share/zookeeper/*:/usr/share/zookeeper/* -Xmx1024m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/hdp/current/zookeeper-server/conf/zoo.cfg
This is the output of netstat -tnlpa | grep 2181 on master.rh.bigdata.cluster: root@RHBigData1:~# netstat -tnlpa | grep 2181
tcp6 0 0 :::2181 :::* LISTEN 17902/java
tcp6 0 0 172.16.138.156:2181 172.16.138.156:59142 TIME_WAIT -
I can't understand why the iptables service is not loaded: root@RHBigData1:~# service iptables stop
Failed to stop iptables.service: Unit iptables.service not loaded.
From another host, "node4", which is my cluster master: root@node4:~# telnet master.rh.bigdata.cluster 2181
Trying 172.16.138.156...
Connected to master.rh.bigdata.cluster.
Escape character is '^]'.
Connection closed by foreign host.
But when I try the manual test, it still does not work! Same error!
root@node4:~# /var/lib/ambari-agent/tmp/zkSmoke.sh /usr/hdp/current/zookeeper-client/bin/zkCli.sh ambari-qa /usr/hdp/current/zookeeper-client/conf 2181 False kinit no_keytab no_principal /var/lib/ambari-agent/tmp/zkSmoke.out
zk_node1=master.rh.bigdata.cluster
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.ZooKeeper).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /zk_smoketest
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:873)
at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:708)
at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:596)
at org.apache.zookeeper.ZooKeeperMain.executeLine(ZooKeeperMain.java:368)
at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:328)
at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:287)
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.ZooKeeper).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /zk_smoketest
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:703)
at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:596)
at org.apache.zookeeper.ZooKeeperMain.executeLine(ZooKeeperMain.java:368)
at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:328)
at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:287)
Running test on host master.rh.bigdata.cluster
Connecting to master.rh.bigdata.cluster:2181
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.ZooKeeper).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Welcome to ZooKeeper!
JLine support is enabled
[zk: master.rh.bigdata.cluster:2181(CONNECTING) 0] get /zk_smoketest
Exception in thread "main" org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /zk_smoketest
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184)
at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:722)
at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:596)
at org.apache.zookeeper.ZooKeeperMain.executeLine(ZooKeeperMain.java:368)
at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:328)
at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:287)
Connecting to master.rh.bigdata.cluster:2181
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.ZooKeeper).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
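As an aside, a quick way to see whether the ZooKeeper server itself is healthy and part of a quorum is to use the four-letter-word commands on the client port (a minimal sketch, assuming nc is available on the host; ZooKeeper 3.4.6 answers these on port 2181):
# 'imok' in the reply means the server is up
echo ruok | nc master.rh.bigdata.cluster 2181
# shows the mode (leader / follower / standalone) and the connected clients
echo stat | nc master.rh.bigdata.cluster 2181
# shorter server summary, including node count and latency
echo srvr | nc master.rh.bigdata.cluster 2181
If ruok answers but the smoke test still fails, the problem is more likely in reaching the rest of the quorum than in this one server.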
... View more
06-13-2019 05:47 PM
Hello, my ZooKeeper is running fine on my 4 nodes, but when I run the service check it fails and I get this error in stderr: Traceback (most recent call last):
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
ExecutionFailed: Execution of '/var/lib/ambari-agent/tmp/zkSmoke.sh /usr/hdp/current/zookeeper-client/bin/zkCli.sh ambari-qa /usr/hdp/current/zookeeper-client/conf 2181 False kinit no_keytab no_principal /var/lib/ambari-agent/tmp/zkSmoke.out' returned 4. zk_node1=master.rh.bigdata.cluster
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.ZooKeeper).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /zk_smoketest
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:873)
at org.apache.zookeeper.ZooKeeperMain.processZKCmd(ZooKeeperMain.java:708)
at org.apache.zookeeper.ZooKeeperMain.processCmd(ZooKeeperMain.java:596)
at org.apache.zookeeper.ZooKeeperMain.executeLine(ZooKeeperMain.java:368)
at org.apache.zookeeper.ZooKeeperMain.run(ZooKeeperMain.java:328)
at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:287)
I also believe that this issue affects other services!
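For context, zkSmoke.sh essentially creates, reads and deletes a /zk_smoketest znode as the ambari-qa user against each ZooKeeper server. The same steps can be reproduced by hand to see exactly where the ConnectionLoss occurs (a rough sketch using the zkCli.sh shipped with HDP; the data string is only an example):
su - ambari-qa
# create, read and delete the smoke-test znode against the master's ZooKeeper
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server master.rh.bigdata.cluster:2181 create /zk_smoketest smoke_data
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server master.rh.bigdata.cluster:2181 get /zk_smoketest
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server master.rh.bigdata.cluster:2181 delete /zk_smoketest
If the create itself fails with ConnectionLoss, the client never managed to hold a session with that server, which points at networking or the server's quorum state rather than at the smoke script.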
... View more
Labels:
- Hortonworks Data Platform (HDP)
06-13-2019 09:41 AM
Hello everybody, I recently added YARN and HBase to my HDP 3.1.0 cluster. I have some issues with HBase and my Timeline Service V2.0 Reader. The YARN alert "ATSv2 HBase Application" shows: "ats-hbase service information could not be retrieved". The HBase Master shuts down after switching to standby mode, even after a successful start:
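On HDP 3.1 the Timeline Service v2.0 Reader depends on the ats-hbase application running as a YARN service, so the first thing worth checking is that service's state (a minimal sketch; running it as the yarn-ats user is an assumption based on the default HDP 3 setup):
su - yarn-ats
# report the state of the embedded ATSv2 HBase YARN service
yarn app -status ats-hbase
If the service reports as failed or stopped, its container logs in the YARN ResourceManager UI usually show why the HBase Master went down.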
... View more
Labels:
06-11-2019 10:49 AM
I can't solve the problem of connections between hosts caused by Errno 111! My hostname is right, iptables is stopped, SELinux is stopped, and safemode is off on the NameNode. Here is the DataNode log file: root@node4:~# cat /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log
2019-06-11 12:30:42,269 INFO datanode.DataNode (LogAdapter.java:info(51)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = node4.rh.bigdata.cluster/172.16.138.113
STARTUP_MSG: args = []
STARTUP_MSG: version = 3.1.1.3.1.0.0-78
STARTUP_MSG: classpath = /usr/hdp/3.1.0.0-78/hadoop/conf:/usr/hdp/3.1.0.0-78/hadoop/lib/snappy-java-1.0.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/token-provider-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-server-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-servlet-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/stax2-api-3.1.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/netty-3.10.5.Final.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/zookeeper-3.4.6.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-api-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-io-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/htrace-core4-4.1.0-incubating.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-framework-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/httpclient-4.5.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-core-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-util-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-simplekdc-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsch-0.1.54.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-net-3.6.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-server-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/accessors-smart-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jul-to-slf4j-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-client-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/xz-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-plugin-classloader-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsr311-api-1.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/httpcore-4.4.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-hdfs-plugin-shim-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/avro-1.7.7.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-beanutils-1.9.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jaxb-api-2.2.11.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-asn1-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-core-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-configuration2-2.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jcip-annotations-1.0-1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-xdr-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-annotations-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-pkix-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-databind-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jettison-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-client-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-io-2.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/gson-2.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-lang3-3.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-identity-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/woodstox-core-5.0.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-recipes-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-crypto-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-common-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/re2j-1.1.jar:/usr/hdp/3.1.0.0-78/h
adoop/lib/metrics-core-3.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-yarn-plugin-shim-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/json-smart-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/javax.servlet-api-3.1.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-http-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-codec-1.11.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/paranamer-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/asm-5.0.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-core-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-admin-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-config-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-server-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-security-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-xml-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/nimbus-jose-jwt-4.41.1.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-kms-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//azure-keyvault-core-1.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/.//azure-data-lake-store-sdk-2.2.7.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-auth-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-nfs.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-annotations-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-annotations.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-auth.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-kms.jar:/usr/hdp/3.1.0.0-78/hadoop/.//azure-storage-7.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common-tests.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure-datalake.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-nfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/./:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/snappy-java-1.0.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/token-provider-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-server-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-servlet-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/stax2-api-3.1.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/netty-3.10.5.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/zookeeper-3.4.6.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-io-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/htrace-core4-4.1.0-incubating.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/curator-framework-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/httpclient-4.5.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-core-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-util-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-simplekdc-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jsch-0.1.54.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commo
ns-net-3.6.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-server-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/accessors-smart-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/curator-client-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-compress-1.4.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/xz-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jsr311-api-1.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/httpcore-4.4.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/avro-1.7.7.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-beanutils-1.9.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jaxb-api-2.2.11.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-asn1-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/okhttp-2.7.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-core-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-configuration2-2.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jcip-annotations-1.0-1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-xdr-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-annotations-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-pkix-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-databind-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/netty-all-4.0.52.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jettison-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-client-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-io-2.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/gson-2.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-util-ajax-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-lang3-3.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/json-simple-1.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-identity-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/woodstox-core-5.0.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/curator-recipes-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-crypto-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-common-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/re2j-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/okio-1.6.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/json-smart-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/javax.servlet-api-3.1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-http-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-xc-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-codec-1.11.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/paranamer-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/asm-5.0.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-core-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-admin-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-config-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/hdp/3.1.0.0-78/h
adoop-hdfs/lib/kerb-server-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-security-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-xml-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-nfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-httpfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-httpfs.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/lib/*:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-handler-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archive-logs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-fs2img-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-codec-http-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archives.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//kafka-clients-0.8.2.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//azure-keyvault-core-1.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//aws-java-sdk-bundle-1.11.271.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-sls.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aliyun-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-distcp-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aliyun.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-resolver-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//google-extensions-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//ojalgo-43.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//azure-data-lake-store-sdk-2.2.7.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-codec-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-app-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//jdom-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//flogger-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-uploader.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-extras-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-kafka.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-uploader-3.1.1.3.1.0.0-78.j
ar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//lz4-1.2.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-openstack-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-openstack.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-common-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//wildfly-openssl-1.0.4.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//flogger-log4j-backend-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-streaming-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aws-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-fs2img.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-extras.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//azure-storage-7.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-datajoin-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//aliyun-sdk-oss-2.8.3.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//flogger-system-backend-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//gcs-connector-1.9.10.3.1.0.0-78-shaded.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aws.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-transport-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-kafka-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-gridmix-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-resourceestimator.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure-datalake.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-rumen-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-resourceestimator-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archives-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-sls-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-examples-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archive-logs.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-buffer-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-j
obclient.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/./:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/ehcache-3.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/fst-2.50.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/snakeyaml-1.16.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/HikariCP-java7-2.4.12.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jackson-module-jaxb-annotations-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/guice-servlet-4.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/guice-4.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/json-io-2.5.1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/metrics-core-3.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/objenesis-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jackson-jaxrs-base-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jackson-jaxrs-json-provider-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/swagger-annotations-1.5.4.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/javax.inject-1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jersey-guice-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/dnsjava-2.1.7.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/java-util-1.9.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-services-api.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-api-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-router-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-registry-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-tests-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-services-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-services-api-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-router.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop
-yarn-services-core.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-web-proxy-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-nodemanager-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-cache-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-protobuf-history-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-library-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-javadoc-tools-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-api-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-history-parser-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-fs-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-dag-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-common-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-2.8-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-mapreduce-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-tests-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-examples-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-job-analyzer-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-internals-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-acls-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-io-2.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-server-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections4-4.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/servlet-api-2.5.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/RoaringBitmap-0.4.9.jar:/usr/hdp/3.1.0.0-78/tez/lib/slf4j-api-1.7.10.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-aws-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-util-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/gcs-connector-1.9.10.3.1.0.0-78-shaded.jar:/usr/hdp/3.1.0.0-78/tez/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/async-http-client-1.9.40.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-codec-1.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/conf:/usr/hdp/3.1.0.0-78/tez/conf_llap:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-2.8-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib:/usr/
hdp/3.1.0.0-78/tez/man:/usr/hdp/3.1.0.0-78/tez/tez-api-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-common-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-dag-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-examples-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-history-parser-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-javadoc-tools-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-job-analyzer-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-mapreduce-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-protobuf-history-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-internals-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-library-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-tests-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-cache-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-acls-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-fs-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/ui:/usr/hdp/3.1.0.0-78/tez/lib/async-http-client-1.9.40.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-codec-1.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections4-4.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-io-2.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/gcs-connector-1.9.10.3.1.0.0-78-shaded.jar:/usr/hdp/3.1.0.0-78/tez/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-aws-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-server-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-util-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/RoaringBitmap-0.4.9.jar:/usr/hdp/3.1.0.0-78/tez/lib/servlet-api-2.5.jar:/usr/hdp/3.1.0.0-78/tez/lib/slf4j-api-1.7.10.jar:/usr/hdp/3.1.0.0-78/tez/lib/tez.tar.gz
STARTUP_MSG: build = git@github.com:hortonworks/hadoop.git -r e4f82af51faec922b4804d0232a637422ec29e64; compiled by 'jenkins' on 2018-12-06T13:34Z
STARTUP_MSG: java = 1.8.0_112
************************************************************/
2019-06-11 12:30:42,315 INFO datanode.DataNode (LogAdapter.java:info(51)) - registered UNIX signal handlers for [TERM, HUP, INT]
2019-06-11 12:30:45,017 INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(137)) - Scheduling a check for [DISK]file:/hadoop/hdfs/data
2019-06-11 12:30:45,692 INFO impl.MetricsConfig (MetricsConfig.java:loadFirst(118)) - Loaded properties from hadoop-metrics2.properties
2019-06-11 12:30:46,151 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(374)) - Scheduled Metric snapshot period at 10 second(s).
2019-06-11 12:30:46,152 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) - DataNode metrics system started
2019-06-11 12:30:47,749 INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-06-11 12:30:47,768 INFO datanode.BlockScanner (BlockScanner.java:<init>(184)) - Initialized block scanner with targetBytesPerSec 1048576
2019-06-11 12:30:47,812 INFO datanode.DataNode (DataNode.java:<init>(486)) - File descriptor passing is enabled.
2019-06-11 12:30:47,817 INFO datanode.DataNode (DataNode.java:<init>(499)) - Configured hostname is node4.rh.bigdata.cluster
2019-06-11 12:30:47,818 INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-06-11 12:30:47,840 INFO datanode.DataNode (DataNode.java:startDataNode(1399)) - Starting DataNode with maxLockedMemory = 0
2019-06-11 12:30:48,025 INFO datanode.DataNode (DataNode.java:initDataXceiver(1147)) - Opened streaming server at /0.0.0.0:50010
2019-06-11 12:30:48,040 INFO datanode.DataNode (DataXceiverServer.java:<init>(78)) - Balancing bandwidth is 6250000 bytes/s
2019-06-11 12:30:48,041 INFO datanode.DataNode (DataXceiverServer.java:<init>(79)) - Number threads for balancing is 50
2019-06-11 12:30:48,060 INFO datanode.DataNode (DataXceiverServer.java:<init>(78)) - Balancing bandwidth is 6250000 bytes/s
2019-06-11 12:30:48,065 INFO datanode.DataNode (DataXceiverServer.java:<init>(79)) - Number threads for balancing is 50
2019-06-11 12:30:48,067 INFO datanode.DataNode (DataNode.java:initDataXceiver(1165)) - Listening on UNIX domain socket: /var/lib/hadoop-hdfs/dn_socket
2019-06-11 12:30:48,369 INFO util.log (Log.java:initialized(192)) - Logging initialized @8550ms
2019-06-11 12:30:48,918 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-06-11 12:30:48,935 INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(81)) - Http request log for http.requests.datanode is not defined
2019-06-11 12:30:48,951 INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(968)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-06-11 12:30:48,960 INFO http.HttpServer2 (HttpServer2.java:addFilter(941)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context datanode
2019-06-11 12:30:48,960 INFO http.HttpServer2 (HttpServer2.java:addFilter(951)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context logs
2019-06-11 12:30:48,961 INFO http.HttpServer2 (HttpServer2.java:addFilter(951)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context static
2019-06-11 12:30:48,961 INFO security.HttpCrossOriginFilterInitializer (HttpCrossOriginFilterInitializer.java:initFilter(49)) - CORS filter not enabled. Please set hadoop.http.cross-origin.enabled to 'true' to enable it
2019-06-11 12:30:49,047 INFO http.HttpServer2 (HttpServer2.java:bindListener(1185)) - Jetty bound to port 37607
2019-06-11 12:30:49,052 INFO server.Server (Server.java:doStart(351)) - jetty-9.3.24.v20180605, build timestamp: 2018-06-05T19:11:56+02:00, git hash: 84205aa28f11a4f31f2a3b86d1bba2cc8ab69827
2019-06-11 12:30:49,393 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-06-11 12:30:49,403 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@4ff8d125{/logs,file:///var/log/hadoop/hdfs/,AVAILABLE}
2019-06-11 12:30:49,404 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@3403e2ac{/static,file:///usr/hdp/3.1.0.0-78/hadoop-hdfs/webapps/static/,AVAILABLE}
2019-06-11 12:30:49,710 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.w.WebAppContext@47f9738{/,file:///usr/hdp/3.1.0.0-78/hadoop-hdfs/webapps/datanode/,AVAILABLE}{/datanode}
2019-06-11 12:30:49,744 INFO server.AbstractConnector (AbstractConnector.java:doStart(278)) - Started ServerConnector@6f3c660a{HTTP/1.1,[http/1.1]}{localhost:37607}
2019-06-11 12:30:49,745 INFO server.Server (Server.java:doStart(419)) - Started @9927ms
2019-06-11 12:30:50,537 INFO web.DatanodeHttpServer (DatanodeHttpServer.java:start(255)) - Listening HTTP traffic on /0.0.0.0:50075
2019-06-11 12:30:50,593 INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause monitor
2019-06-11 12:30:50,673 INFO datanode.DataNode (DataNode.java:startDataNode(1427)) - dnUserName = hdfs
2019-06-11 12:30:50,674 INFO datanode.DataNode (DataNode.java:startDataNode(1428)) - supergroup = hdfs
2019-06-11 12:30:50,842 INFO ipc.CallQueueManager (CallQueueManager.java:<init>(84)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2019-06-11 12:30:50,896 INFO ipc.Server (Server.java:run(1074)) - Starting Socket Reader #1 for port 8010
2019-06-11 12:30:52,002 INFO datanode.DataNode (DataNode.java:initIpcServer(1033)) - Opened IPC server at /0.0.0.0:8010
2019-06-11 12:30:52,052 INFO datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
2019-06-11 12:30:52,086 INFO datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(210)) - Starting BPOfferServices for nameservices: <default>
2019-06-11 12:30:52,124 INFO datanode.DataNode (BPServiceActor.java:run(810)) - Block pool <registering> (Datanode Uuid unassigned) service to node4.rh.bigdata.cluster/172.16.138.113:8020 starting to offer service
2019-06-11 12:30:52,184 INFO ipc.Server (Server.java:run(1153)) - IPC Server listener on 8010: starting
2019-06-11 12:30:52,226 INFO ipc.Server (Server.java:run(1314)) - IPC Server Responder: starting
2019-06-11 12:30:52,777 INFO datanode.DataNode (BPOfferService.java:verifyAndSetNamespaceInfo(378)) - Acknowledging ACTIVE Namenode during handshakeBlock pool <registering> (Datanode Uuid unassigned) service to node4.rh.bigdata.cluster/172.16.138.113:8020
2019-06-11 12:30:52,788 INFO common.Storage (DataStorage.java:getParallelVolumeLoadThreadsNum(354)) - Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2019-06-11 12:30:52,817 INFO common.Storage (Storage.java:tryLock(905)) - Lock on /hadoop/hdfs/data/in_use.lock acquired by nodename 1800@node4.rh.bigdata.cluster
2019-06-11 12:30:52,832 WARN common.Storage (DataStorage.java:loadDataStorage(418)) - Failed to add storage directory [DISK]file:/hadoop/hdfs/data
java.io.IOException: Incompatible clusterIDs in /hadoop/hdfs/data: namenode clusterID = CID-bd1a4e24-9ff2-4ab8-928a-f04000e375cc; datanode clusterID = CID-9a605cbd-1b0e-41d3-885e-f0efcbe54851
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:736)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:294)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:407)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:387)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:551)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1718)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1678)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:817)
at java.lang.Thread.run(Thread.java:745)
2019-06-11 12:30:52,843 ERROR datanode.DataNode (BPServiceActor.java:run(829)) - Initialization failed for Block pool <registering> (Datanode Uuid 746ac7cf-9f82-4411-8b37-3a41f1a64e71) service to node4.rh.bigdata.cluster/172.16.138.113:8020. Exiting.
java.io.IOException: All specified directories have failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:552)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1718)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1678)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:817)
at java.lang.Thread.run(Thread.java:745)
2019-06-11 12:30:52,843 WARN datanode.DataNode (BPServiceActor.java:run(853)) - Ending block pool service for: Block pool <registering> (Datanode Uuid 746ac7cf-9f82-4411-8b37-3a41f1a64e71) service to node4.rh.bigdata.cluster/172.16.138.113:8020
2019-06-11 12:30:52,950 INFO datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool <registering> (Datanode Uuid 746ac7cf-9f82-4411-8b37-3a41f1a64e71)
2019-06-11 12:30:54,951 WARN datanode.DataNode (DataNode.java:secureMain(2890)) - Exiting Datanode
2019-06-11 12:30:54,968 INFO datanode.DataNode (LogAdapter.java:info(51)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at node4.rh.bigdata.cluster/172.16.138.113
************************************************************/
Here is the NameNode log file as well: root@node4:~# cat /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log
2019-06-11 12:30:42,269 INFO datanode.DataNode (LogAdapter.java:info(51)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = node4.rh.bigdata.cluster/172.16.138.113
STARTUP_MSG: args = []
STARTUP_MSG: version = 3.1.1.3.1.0.0-78
STARTUP_MSG: classpath = /usr/hdp/3.1.0.0-78/hadoop/conf:/usr/hdp/3.1.0.0-78/hadoop/lib/snappy-java-1.0.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/token-provider-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-server-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-servlet-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/stax2-api-3.1.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/netty-3.10.5.Final.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/zookeeper-3.4.6.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-api-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-io-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/htrace-core4-4.1.0-incubating.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-framework-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/httpclient-4.5.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-core-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-util-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-simplekdc-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsch-0.1.54.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-net-3.6.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-server-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/accessors-smart-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jul-to-slf4j-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-client-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/xz-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-plugin-classloader-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsr311-api-1.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/httpcore-4.4.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-hdfs-plugin-shim-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/avro-1.7.7.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-beanutils-1.9.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jaxb-api-2.2.11.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-asn1-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-core-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-configuration2-2.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jcip-annotations-1.0-1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-xdr-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-annotations-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-pkix-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-databind-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jettison-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-client-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-io-2.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/gson-2.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-lang3-3.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-identity-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/woodstox-core-5.0.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-recipes-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-crypto-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-common-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/re2j-1.1.jar:/usr/hdp/3.1.0.0-78/h
adoop/lib/metrics-core-3.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-yarn-plugin-shim-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/json-smart-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/javax.servlet-api-3.1.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-http-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-codec-1.11.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/paranamer-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/asm-5.0.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-core-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-admin-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-config-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-server-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-security-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-xml-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/nimbus-jose-jwt-4.41.1.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-kms-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//azure-keyvault-core-1.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/.//azure-data-lake-store-sdk-2.2.7.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-auth-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-nfs.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-annotations-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-annotations.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-auth.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-kms.jar:/usr/hdp/3.1.0.0-78/hadoop/.//azure-storage-7.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common-tests.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure-datalake.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-nfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/./:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/snappy-java-1.0.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/token-provider-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-server-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-servlet-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/stax2-api-3.1.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/netty-3.10.5.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/zookeeper-3.4.6.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-io-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/htrace-core4-4.1.0-incubating.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/curator-framework-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/httpclient-4.5.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-core-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-util-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-simplekdc-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jsch-0.1.54.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commo
ns-net-3.6.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-server-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/accessors-smart-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/curator-client-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-compress-1.4.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/xz-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jsr311-api-1.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/httpcore-4.4.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/avro-1.7.7.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-beanutils-1.9.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jaxb-api-2.2.11.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-asn1-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/okhttp-2.7.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-core-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-configuration2-2.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jcip-annotations-1.0-1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-xdr-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-annotations-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-pkix-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-databind-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/netty-all-4.0.52.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jettison-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-client-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-io-2.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/gson-2.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-util-ajax-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-lang3-3.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/json-simple-1.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-identity-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/woodstox-core-5.0.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/curator-recipes-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-crypto-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-common-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/re2j-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/okio-1.6.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/json-smart-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/javax.servlet-api-3.1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-http-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-xc-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-codec-1.11.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/paranamer-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/asm-5.0.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-core-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-admin-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-config-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/hdp/3.1.0.0-78/h
adoop-hdfs/lib/kerb-server-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-security-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-xml-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-nfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-httpfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-httpfs.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/lib/*:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-handler-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archive-logs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-fs2img-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-codec-http-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archives.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//kafka-clients-0.8.2.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//azure-keyvault-core-1.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//aws-java-sdk-bundle-1.11.271.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-sls.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aliyun-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-distcp-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aliyun.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-resolver-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//google-extensions-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//ojalgo-43.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//azure-data-lake-store-sdk-2.2.7.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-codec-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-app-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//jdom-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//flogger-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-uploader.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-extras-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-kafka.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-uploader-3.1.1.3.1.0.0-78.j
ar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//lz4-1.2.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-openstack-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-openstack.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-common-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//wildfly-openssl-1.0.4.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//flogger-log4j-backend-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-streaming-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aws-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-fs2img.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-extras.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//azure-storage-7.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-datajoin-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//aliyun-sdk-oss-2.8.3.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//flogger-system-backend-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//gcs-connector-1.9.10.3.1.0.0-78-shaded.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aws.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-transport-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-kafka-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-gridmix-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-resourceestimator.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure-datalake.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-rumen-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-resourceestimator-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archives-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-sls-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-examples-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archive-logs.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-buffer-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-j
obclient.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/./:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/ehcache-3.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/fst-2.50.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/snakeyaml-1.16.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/HikariCP-java7-2.4.12.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jackson-module-jaxb-annotations-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/guice-servlet-4.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/guice-4.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/json-io-2.5.1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/metrics-core-3.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/objenesis-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jackson-jaxrs-base-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jackson-jaxrs-json-provider-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/swagger-annotations-1.5.4.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/javax.inject-1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jersey-guice-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/dnsjava-2.1.7.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/java-util-1.9.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-services-api.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-api-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-router-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-registry-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-tests-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-services-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-services-api-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-router.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop
-yarn-services-core.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-web-proxy-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-nodemanager-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-cache-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-protobuf-history-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-library-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-javadoc-tools-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-api-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-history-parser-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-fs-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-dag-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-common-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-2.8-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-mapreduce-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-tests-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-examples-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-job-analyzer-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-internals-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-acls-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-io-2.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-server-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections4-4.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/servlet-api-2.5.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/RoaringBitmap-0.4.9.jar:/usr/hdp/3.1.0.0-78/tez/lib/slf4j-api-1.7.10.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-aws-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-util-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/gcs-connector-1.9.10.3.1.0.0-78-shaded.jar:/usr/hdp/3.1.0.0-78/tez/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/async-http-client-1.9.40.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-codec-1.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/conf:/usr/hdp/3.1.0.0-78/tez/conf_llap:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-2.8-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib:/usr/
hdp/3.1.0.0-78/tez/man:/usr/hdp/3.1.0.0-78/tez/tez-api-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-common-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-dag-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-examples-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-history-parser-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-javadoc-tools-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-job-analyzer-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-mapreduce-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-protobuf-history-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-internals-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-library-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-tests-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-cache-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-acls-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-fs-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/ui:/usr/hdp/3.1.0.0-78/tez/lib/async-http-client-1.9.40.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-codec-1.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections4-4.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-io-2.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/gcs-connector-1.9.10.3.1.0.0-78-shaded.jar:/usr/hdp/3.1.0.0-78/tez/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-aws-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-server-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-util-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/RoaringBitmap-0.4.9.jar:/usr/hdp/3.1.0.0-78/tez/lib/servlet-api-2.5.jar:/usr/hdp/3.1.0.0-78/tez/lib/slf4j-api-1.7.10.jar:/usr/hdp/3.1.0.0-78/tez/lib/tez.tar.gz
STARTUP_MSG: build = git@github.com:hortonworks/hadoop.git -r e4f82af51faec922b4804d0232a637422ec29e64; compiled by 'jenkins' on 2018-12-06T13:34Z
STARTUP_MSG: java = 1.8.0_112
************************************************************/
2019-06-11 12:30:42,315 INFO datanode.DataNode (LogAdapter.java:info(51)) - registered UNIX signal handlers for [TERM, HUP, INT]
2019-06-11 12:30:45,017 INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(137)) - Scheduling a check for [DISK]file:/hadoop/hdfs/data
2019-06-11 12:30:45,692 INFO impl.MetricsConfig (MetricsConfig.java:loadFirst(118)) - Loaded properties from hadoop-metrics2.properties
2019-06-11 12:30:46,151 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(374)) - Scheduled Metric snapshot period at 10 second(s).
2019-06-11 12:30:46,152 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) - DataNode metrics system started
2019-06-11 12:30:47,749 INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-06-11 12:30:47,768 INFO datanode.BlockScanner (BlockScanner.java:<init>(184)) - Initialized block scanner with targetBytesPerSec 1048576
2019-06-11 12:30:47,812 INFO datanode.DataNode (DataNode.java:<init>(486)) - File descriptor passing is enabled.
2019-06-11 12:30:47,817 INFO datanode.DataNode (DataNode.java:<init>(499)) - Configured hostname is node4.rh.bigdata.cluster
2019-06-11 12:30:47,818 INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-06-11 12:30:47,840 INFO datanode.DataNode (DataNode.java:startDataNode(1399)) - Starting DataNode with maxLockedMemory = 0
2019-06-11 12:30:48,025 INFO datanode.DataNode (DataNode.java:initDataXceiver(1147)) - Opened streaming server at /0.0.0.0:50010
2019-06-11 12:30:48,040 INFO datanode.DataNode (DataXceiverServer.java:<init>(78)) - Balancing bandwidth is 6250000 bytes/s
2019-06-11 12:30:48,041 INFO datanode.DataNode (DataXceiverServer.java:<init>(79)) - Number threads for balancing is 50
2019-06-11 12:30:48,060 INFO datanode.DataNode (DataXceiverServer.java:<init>(78)) - Balancing bandwidth is 6250000 bytes/s
2019-06-11 12:30:48,065 INFO datanode.DataNode (DataXceiverServer.java:<init>(79)) - Number threads for balancing is 50
2019-06-11 12:30:48,067 INFO datanode.DataNode (DataNode.java:initDataXceiver(1165)) - Listening on UNIX domain socket: /var/lib/hadoop-hdfs/dn_socket
2019-06-11 12:30:48,369 INFO util.log (Log.java:initialized(192)) - Logging initialized @8550ms
2019-06-11 12:30:48,918 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-06-11 12:30:48,935 INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(81)) - Http request log for http.requests.datanode is not defined
2019-06-11 12:30:48,951 INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(968)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-06-11 12:30:48,960 INFO http.HttpServer2 (HttpServer2.java:addFilter(941)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context datanode
2019-06-11 12:30:48,960 INFO http.HttpServer2 (HttpServer2.java:addFilter(951)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context logs
2019-06-11 12:30:48,961 INFO http.HttpServer2 (HttpServer2.java:addFilter(951)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context static
2019-06-11 12:30:48,961 INFO security.HttpCrossOriginFilterInitializer (HttpCrossOriginFilterInitializer.java:initFilter(49)) - CORS filter not enabled. Please set hadoop.http.cross-origin.enabled to 'true' to enable it
2019-06-11 12:30:49,047 INFO http.HttpServer2 (HttpServer2.java:bindListener(1185)) - Jetty bound to port 37607
2019-06-11 12:30:49,052 INFO server.Server (Server.java:doStart(351)) - jetty-9.3.24.v20180605, build timestamp: 2018-06-05T19:11:56+02:00, git hash: 84205aa28f11a4f31f2a3b86d1bba2cc8ab69827
2019-06-11 12:30:49,393 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-06-11 12:30:49,403 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@4ff8d125{/logs,file:///var/log/hadoop/hdfs/,AVAILABLE}
2019-06-11 12:30:49,404 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@3403e2ac{/static,file:///usr/hdp/3.1.0.0-78/hadoop-hdfs/webapps/static/,AVAILABLE}
2019-06-11 12:30:49,710 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.w.WebAppContext@47f9738{/,file:///usr/hdp/3.1.0.0-78/hadoop-hdfs/webapps/datanode/,AVAILABLE}{/datanode}
2019-06-11 12:30:49,744 INFO server.AbstractConnector (AbstractConnector.java:doStart(278)) - Started ServerConnector@6f3c660a{HTTP/1.1,[http/1.1]}{localhost:37607}
2019-06-11 12:30:49,745 INFO server.Server (Server.java:doStart(419)) - Started @9927ms
2019-06-11 12:30:50,537 INFO web.DatanodeHttpServer (DatanodeHttpServer.java:start(255)) - Listening HTTP traffic on /0.0.0.0:50075
2019-06-11 12:30:50,593 INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause monitor
2019-06-11 12:30:50,673 INFO datanode.DataNode (DataNode.java:startDataNode(1427)) - dnUserName = hdfs
2019-06-11 12:30:50,674 INFO datanode.DataNode (DataNode.java:startDataNode(1428)) - supergroup = hdfs
2019-06-11 12:30:50,842 INFO ipc.CallQueueManager (CallQueueManager.java:<init>(84)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2019-06-11 12:30:50,896 INFO ipc.Server (Server.java:run(1074)) - Starting Socket Reader #1 for port 8010
2019-06-11 12:30:52,002 INFO datanode.DataNode (DataNode.java:initIpcServer(1033)) - Opened IPC server at /0.0.0.0:8010
2019-06-11 12:30:52,052 INFO datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
2019-06-11 12:30:52,086 INFO datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(210)) - Starting BPOfferServices for nameservices: <default>
2019-06-11 12:30:52,124 INFO datanode.DataNode (BPServiceActor.java:run(810)) - Block pool <registering> (Datanode Uuid unassigned) service to node4.rh.bigdata.cluster/172.16.138.113:8020 starting to offer service
2019-06-11 12:30:52,184 INFO ipc.Server (Server.java:run(1153)) - IPC Server listener on 8010: starting
2019-06-11 12:30:52,226 INFO ipc.Server (Server.java:run(1314)) - IPC Server Responder: starting
2019-06-11 12:30:52,777 INFO datanode.DataNode (BPOfferService.java:verifyAndSetNamespaceInfo(378)) - Acknowledging ACTIVE Namenode during handshakeBlock pool <registering> (Datanode Uuid unassigned) service to node4.rh.bigdata.cluster/172.16.138.113:8020
2019-06-11 12:30:52,788 INFO common.Storage (DataStorage.java:getParallelVolumeLoadThreadsNum(354)) - Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2019-06-11 12:30:52,817 INFO common.Storage (Storage.java:tryLock(905)) - Lock on /hadoop/hdfs/data/in_use.lock acquired by nodename 1800@node4.rh.bigdata.cluster
2019-06-11 12:30:52,832 WARN common.Storage (DataStorage.java:loadDataStorage(418)) - Failed to add storage directory [DISK]file:/hadoop/hdfs/data
java.io.IOException: Incompatible clusterIDs in /hadoop/hdfs/data: namenode clusterID = CID-bd1a4e24-9ff2-4ab8-928a-f04000e375cc; datanode clusterID = CID-9a605cbd-1b0e-41d3-885e-f0efcbe54851
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:736)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:294)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:407)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:387)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:551)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1718)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1678)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:817)
at java.lang.Thread.run(Thread.java:745)
2019-06-11 12:30:52,843 ERROR datanode.DataNode (BPServiceActor.java:run(829)) - Initialization failed for Block pool <registering> (Datanode Uuid 746ac7cf-9f82-4411-8b37-3a41f1a64e71) service to node4.rh.bigdata.cluster/172.16.138.113:8020. Exiting.
java.io.IOException: All specified directories have failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:552)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1718)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1678)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:817)
at java.lang.Thread.run(Thread.java:745)
2019-06-11 12:30:52,843 WARN datanode.DataNode (BPServiceActor.java:run(853)) - Ending block pool service for: Block pool <registering> (Datanode Uuid 746ac7cf-9f82-4411-8b37-3a41f1a64e71) service to node4.rh.bigdata.cluster/172.16.138.113:8020
2019-06-11 12:30:52,950 INFO datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool <registering> (Datanode Uuid 746ac7cf-9f82-4411-8b37-3a41f1a64e71)
2019-06-11 12:30:54,951 WARN datanode.DataNode (DataNode.java:secureMain(2890)) - Exiting Datanode
2019-06-11 12:30:54,968 INFO datanode.DataNode (LogAdapter.java:info(51)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at node4.rh.bigdata.cluster/172.16.138.113
************************************************************/

I have 9 alerts in the Ambari UI, among them the DataNode Process alert, the DataNode Web UI alert, and the Percent DataNodes Available alert.
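For reference, the mismatch reported above can be confirmed by comparing the clusterID stored on each side. This is a minimal sketch, assuming the DataNode data directory /hadoop/hdfs/data shown in the log; the NameNode metadata directory is resolved with hdfs getconf, so the exact path may differ on your cluster:

# On the DataNode (node4): read the clusterID the DataNode has on disk
grep clusterID /hadoop/hdfs/data/current/VERSION

# On the NameNode host: resolve dfs.namenode.name.dir, then read its clusterID
NN_DIR=$(hdfs getconf -confKey dfs.namenode.name.dir | sed 's#^file://##' | cut -d, -f1)
grep clusterID "$NN_DIR/current/VERSION"

# The two values must match; the log above shows
#   namenode clusterID = CID-bd1a4e24-9ff2-4ab8-928a-f04000e375cc
#   datanode clusterID = CID-9a605cbd-1b0e-41d3-885e-f0efcbe54851

If the values differ, as they do here, the usual cause is that the NameNode was reformatted after the DataNode had already registered, and the usual remedies are either to align the DataNode's clusterID with the NameNode's or to clear and reinitialize the DataNode data directory.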
... View more
06-11-2019
10:17 AM
@Jay Kumar SenSharma safe mode is off on all my nodes, including the NameNode host. # /usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://node4.rh.bigdata.cluster:8020 -safemode get
Safe mode is OFF Now that my NameNode starts successfully, my DataNodes die right after a successful start and I get these errors... root@node4:~# cat /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log
2019-06-11 12:09:51,630 INFO namenode.NameNode (LogAdapter.java:info(51)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = node4.rh.bigdata.cluster/172.16.138.113
STARTUP_MSG: args = []
STARTUP_MSG: version = 3.1.1.3.1.0.0-78
STARTUP_MSG: classpath = /usr/hdp/3.1.0.0-78/hadoop/conf:/usr/hdp/3.1.0.0-78/hadoop/lib/snappy-java-1.0.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/token-provider-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-server-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-servlet-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/stax2-api-3.1.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/netty-3.10.5.Final.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/zookeeper-3.4.6.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-api-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-io-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/htrace-core4-4.1.0-incubating.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-framework-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/httpclient-4.5.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-core-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-util-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-simplekdc-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsch-0.1.54.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-net-3.6.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-server-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/accessors-smart-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jul-to-slf4j-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-client-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/xz-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-plugin-classloader-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsr311-api-1.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/httpcore-4.4.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-hdfs-plugin-shim-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/avro-1.7.7.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-beanutils-1.9.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jaxb-api-2.2.11.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-asn1-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-core-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-configuration2-2.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jcip-annotations-1.0-1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-xdr-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-annotations-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-pkix-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-databind-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jettison-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-client-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-io-2.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/gson-2.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-lang3-3.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-identity-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/woodstox-core-5.0.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-recipes-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-crypto-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-common-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/re2j-1.1.jar:/usr/hdp/3.1.0.0-78/h
adoop/lib/metrics-core-3.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-yarn-plugin-shim-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/json-smart-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/javax.servlet-api-3.1.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-http-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-codec-1.11.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/paranamer-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/asm-5.0.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-core-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-admin-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-config-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-server-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-security-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-xml-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/nimbus-jose-jwt-4.41.1.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-kms-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//azure-keyvault-core-1.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/.//azure-data-lake-store-sdk-2.2.7.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-auth-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-nfs.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-annotations-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-annotations.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-auth.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-kms.jar:/usr/hdp/3.1.0.0-78/hadoop/.//azure-storage-7.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common-tests.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure-datalake.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-nfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/./:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/snappy-java-1.0.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/token-provider-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-server-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-servlet-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/stax2-api-3.1.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/netty-3.10.5.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/zookeeper-3.4.6.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-io-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/htrace-core4-4.1.0-incubating.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/curator-framework-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/httpclient-4.5.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-core-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-util-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-simplekdc-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jsch-0.1.54.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commo
ns-net-3.6.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-server-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/accessors-smart-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/curator-client-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-compress-1.4.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/xz-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jsr311-api-1.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/httpcore-4.4.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/avro-1.7.7.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-beanutils-1.9.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jaxb-api-2.2.11.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-asn1-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/okhttp-2.7.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-core-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-configuration2-2.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jcip-annotations-1.0-1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-xdr-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-annotations-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-pkix-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-databind-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/netty-all-4.0.52.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jettison-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-client-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-io-2.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/gson-2.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-util-ajax-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-lang3-3.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/json-simple-1.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-identity-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/woodstox-core-5.0.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/curator-recipes-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-crypto-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-common-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/re2j-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/okio-1.6.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/json-smart-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/javax.servlet-api-3.1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-http-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-xc-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-codec-1.11.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/paranamer-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/asm-5.0.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-core-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-admin-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-config-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/hdp/3.1.0.0-78/h
adoop-hdfs/lib/kerb-server-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-security-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-xml-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-nfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-httpfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-httpfs.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/lib/*:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-handler-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archive-logs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-fs2img-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-codec-http-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archives.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//kafka-clients-0.8.2.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//azure-keyvault-core-1.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//aws-java-sdk-bundle-1.11.271.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-sls.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aliyun-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-distcp-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aliyun.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-resolver-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//google-extensions-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//ojalgo-43.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//azure-data-lake-store-sdk-2.2.7.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-codec-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-app-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//jdom-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//flogger-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-uploader.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-extras-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-kafka.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-uploader-3.1.1.3.1.0.0-78.j
ar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//lz4-1.2.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-openstack-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-openstack.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-common-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//wildfly-openssl-1.0.4.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//flogger-log4j-backend-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-streaming-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aws-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-fs2img.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-extras.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//azure-storage-7.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-datajoin-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//aliyun-sdk-oss-2.8.3.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//flogger-system-backend-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//gcs-connector-1.9.10.3.1.0.0-78-shaded.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aws.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-transport-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-kafka-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-gridmix-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-resourceestimator.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure-datalake.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-rumen-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-resourceestimator-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archives-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-sls-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-examples-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archive-logs.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-buffer-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-j
obclient.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/./:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/ehcache-3.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/fst-2.50.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/snakeyaml-1.16.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/HikariCP-java7-2.4.12.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jackson-module-jaxb-annotations-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/guice-servlet-4.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/guice-4.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/json-io-2.5.1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/metrics-core-3.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/objenesis-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jackson-jaxrs-base-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jackson-jaxrs-json-provider-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/swagger-annotations-1.5.4.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/javax.inject-1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jersey-guice-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/dnsjava-2.1.7.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/java-util-1.9.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-services-api.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-api-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-router-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-registry-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-tests-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-services-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-services-api-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-router.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop
-yarn-services-core.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-web-proxy-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-nodemanager-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-cache-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-protobuf-history-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-library-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-javadoc-tools-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-api-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-history-parser-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-fs-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-dag-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-common-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-2.8-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-mapreduce-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-tests-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-examples-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-job-analyzer-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-internals-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-acls-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-io-2.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-server-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections4-4.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/servlet-api-2.5.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/RoaringBitmap-0.4.9.jar:/usr/hdp/3.1.0.0-78/tez/lib/slf4j-api-1.7.10.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-aws-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-util-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/gcs-connector-1.9.10.3.1.0.0-78-shaded.jar:/usr/hdp/3.1.0.0-78/tez/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/async-http-client-1.9.40.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-codec-1.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/conf:/usr/hdp/3.1.0.0-78/tez/conf_llap:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-2.8-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib:/usr/
hdp/3.1.0.0-78/tez/man:/usr/hdp/3.1.0.0-78/tez/tez-api-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-common-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-dag-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-examples-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-history-parser-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-javadoc-tools-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-job-analyzer-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-mapreduce-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-protobuf-history-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-internals-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-library-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-tests-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-cache-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-acls-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-fs-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/ui:/usr/hdp/3.1.0.0-78/tez/lib/async-http-client-1.9.40.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-codec-1.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections4-4.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-io-2.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/gcs-connector-1.9.10.3.1.0.0-78-shaded.jar:/usr/hdp/3.1.0.0-78/tez/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-aws-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-server-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-util-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/RoaringBitmap-0.4.9.jar:/usr/hdp/3.1.0.0-78/tez/lib/servlet-api-2.5.jar:/usr/hdp/3.1.0.0-78/tez/lib/slf4j-api-1.7.10.jar:/usr/hdp/3.1.0.0-78/tez/lib/tez.tar.gz
STARTUP_MSG: build = git@github.com:hortonworks/hadoop.git -r e4f82af51faec922b4804d0232a637422ec29e64; compiled by 'jenkins' on 2018-12-06T13:34Z
STARTUP_MSG: java = 1.8.0_112
************************************************************/
2019-06-11 12:09:51,672 INFO namenode.NameNode (LogAdapter.java:info(51)) - registered UNIX signal handlers for [TERM, HUP, INT]
2019-06-11 12:09:52,024 INFO namenode.NameNode (NameNode.java:createNameNode(1583)) - createNameNode []
2019-06-11 12:09:52,507 INFO impl.MetricsConfig (MetricsConfig.java:loadFirst(118)) - Loaded properties from hadoop-metrics2.properties
2019-06-11 12:09:52,992 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(374)) - Scheduled Metric snapshot period at 10 second(s).
2019-06-11 12:09:52,992 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) - NameNode metrics system started
2019-06-11 12:09:53,167 INFO namenode.NameNodeUtils (NameNodeUtils.java:getClientNamenodeAddress(79)) - fs.defaultFS is hdfs://node4.rh.bigdata.cluster:8020
2019-06-11 12:09:53,168 INFO namenode.NameNode (NameNode.java:<init>(928)) - Clients should use node4.rh.bigdata.cluster:8020 to access this namenode/service.
2019-06-11 12:09:53,848 INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause monitor
2019-06-11 12:09:53,964 INFO hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1605)) - Starting Web-server for hdfs at: http://node4.rh.bigdata.cluster:50070
2019-06-11 12:09:54,065 INFO util.log (Log.java:initialized(192)) - Logging initialized @5526ms
2019-06-11 12:09:54,570 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-06-11 12:09:54,638 INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(81)) - Http request log for http.requests.namenode is not defined
2019-06-11 12:09:54,683 INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(968)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-06-11 12:09:54,699 INFO http.HttpServer2 (HttpServer2.java:addFilter(941)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context hdfs
2019-06-11 12:09:54,700 INFO http.HttpServer2 (HttpServer2.java:addFilter(951)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context static
2019-06-11 12:09:54,719 INFO http.HttpServer2 (HttpServer2.java:addFilter(951)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context logs
2019-06-11 12:09:54,726 INFO security.HttpCrossOriginFilterInitializer (HttpCrossOriginFilterInitializer.java:initFilter(49)) - CORS filter not enabled. Please set hadoop.http.cross-origin.enabled to 'true' to enable it
2019-06-11 12:09:54,820 INFO http.HttpServer2 (NameNodeHttpServer.java:initWebHdfs(100)) - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2019-06-11 12:09:54,821 INFO http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(787)) - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2019-06-11 12:09:54,865 INFO http.HttpServer2 (HttpServer2.java:bindListener(1185)) - Jetty bound to port 50070
2019-06-11 12:09:54,869 INFO server.Server (Server.java:doStart(351)) - jetty-9.3.24.v20180605, build timestamp: 2018-06-05T19:11:56+02:00, git hash: 84205aa28f11a4f31f2a3b86d1bba2cc8ab69827
2019-06-11 12:09:55,057 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-06-11 12:09:55,068 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@2b30a42c{/logs,file:///var/log/hadoop/hdfs/,AVAILABLE}
2019-06-11 12:09:55,071 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@359df09a{/static,file:///usr/hdp/3.1.0.0-78/hadoop-hdfs/webapps/static/,AVAILABLE}
2019-06-11 12:09:55,565 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.w.WebAppContext@1169afe1{/,file:///usr/hdp/3.1.0.0-78/hadoop-hdfs/webapps/hdfs/,AVAILABLE}{/hdfs}
2019-06-11 12:09:55,590 INFO server.AbstractConnector (AbstractConnector.java:doStart(278)) - Started ServerConnector@13d9cbf5{HTTP/1.1,[http/1.1]}{node4.rh.bigdata.cluster:50070}
2019-06-11 12:09:55,590 INFO server.Server (Server.java:doStart(419)) - Started @7052ms
2019-06-11 12:09:56,256 WARN common.Util (Util.java:stringAsURI(99)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-06-11 12:09:56,257 WARN common.Util (Util.java:stringAsURI(99)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-06-11 12:09:56,258 WARN namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(680)) - Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-06-11 12:09:56,258 WARN namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(685)) - Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-06-11 12:09:56,277 WARN common.Util (Util.java:stringAsURI(99)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-06-11 12:09:56,277 WARN common.Util (Util.java:stringAsURI(99)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-06-11 12:09:56,333 WARN common.Storage (NNStorage.java:setRestoreFailedStorage(223)) - set restore failed storage to true
2019-06-11 12:09:56,422 INFO namenode.FSEditLog (FSEditLog.java:newInstance(227)) - Edit logging is async:true
2019-06-11 12:09:56,490 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(749)) - KeyProvider: null
2019-06-11 12:09:56,490 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(752)) - Enabling async auditlog
2019-06-11 12:09:56,499 INFO namenode.FSNamesystem (FSNamesystemLock.java:<init>(122)) - fsLock is fair: false
2019-06-11 12:09:56,500 INFO namenode.FSNamesystem (FSNamesystemLock.java:<init>(138)) - Detailed lock hold time metrics enabled: false
2019-06-11 12:09:56,528 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(774)) - fsOwner = hdfs (auth:SIMPLE)
2019-06-11 12:09:56,528 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(775)) - supergroup = hdfs
2019-06-11 12:09:56,529 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(776)) - isPermissionEnabled = true
2019-06-11 12:09:56,530 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(787)) - HA Enabled: false
2019-06-11 12:09:56,651 INFO blockmanagement.HeartbeatManager (HeartbeatManager.java:<init>(84)) - Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval
2019-06-11 12:09:56,657 INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-06-11 12:09:56,715 INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(301)) - dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2019-06-11 12:09:56,715 INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(309)) - dfs.namenode.datanode.registration.ip-hostname-check=true
2019-06-11 12:09:56,725 INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(79)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:01:00:00.000
2019-06-11 12:09:56,726 INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(85)) - The block deletion will start around 2019 juin 11 13:09:56
2019-06-11 12:09:56,730 INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map BlocksMap
2019-06-11 12:09:56,730 INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2019-06-11 12:09:56,735 INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1011.3 MB = 20.2 MB
2019-06-11 12:09:56,735 INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^21 = 2097152 entries
2019-06-11 12:09:56,765 INFO blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(579)) - dfs.block.access.token.enable = true
2019-06-11 12:09:56,765 INFO blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(601)) - dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null
2019-06-11 12:09:56,887 INFO blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(161)) - dfs.namenode.safemode.threshold-pct = 1.0
2019-06-11 12:09:56,887 INFO blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(162)) - dfs.namenode.safemode.min.datanodes = 0
2019-06-11 12:09:56,888 INFO blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(164)) - dfs.namenode.safemode.extension = 30000
2019-06-11 12:09:56,888 INFO blockmanagement.BlockManager (BlockManager.java:<init>(565)) - defaultReplication = 3
2019-06-11 12:09:56,888 INFO blockmanagement.BlockManager (BlockManager.java:<init>(566)) - maxReplication = 50
2019-06-11 12:09:56,888 INFO blockmanagement.BlockManager (BlockManager.java:<init>(567)) - minReplication = 1
2019-06-11 12:09:56,889 INFO blockmanagement.BlockManager (BlockManager.java:<init>(568)) - maxReplicationStreams = 2
2019-06-11 12:09:56,889 INFO blockmanagement.BlockManager (BlockManager.java:<init>(569)) - redundancyRecheckInterval = 3000ms
2019-06-11 12:09:56,889 INFO blockmanagement.BlockManager (BlockManager.java:<init>(570)) - encryptDataTransfer = false
2019-06-11 12:09:56,889 INFO blockmanagement.BlockManager (BlockManager.java:<init>(571)) - maxNumBlocksToLog = 1000
2019-06-11 12:09:57,019 INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map INodeMap
2019-06-11 12:09:57,020 INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2019-06-11 12:09:57,020 INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 1.0% max memory 1011.3 MB = 10.1 MB
2019-06-11 12:09:57,020 INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^20 = 1048576 entries
2019-06-11 12:09:57,023 INFO namenode.FSDirectory (FSDirectory.java:<init>(287)) - ACLs enabled? true
2019-06-11 12:09:57,023 INFO namenode.FSDirectory (FSDirectory.java:<init>(291)) - POSIX ACL inheritance enabled? true
2019-06-11 12:09:57,025 INFO namenode.FSDirectory (FSDirectory.java:<init>(295)) - XAttrs enabled? true
2019-06-11 12:09:57,026 INFO namenode.NameNode (FSDirectory.java:<init>(359)) - Caching file names occurring more than 10 times
2019-06-11 12:09:57,043 INFO snapshot.SnapshotManager (SnapshotManager.java:<init>(124)) - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2019-06-11 12:09:57,050 INFO snapshot.SnapshotManager (DirectoryDiffListFactory.java:init(43)) - SkipList is disabled
2019-06-11 12:09:57,070 INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map cachedBlocks
2019-06-11 12:09:57,071 INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2019-06-11 12:09:57,071 INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.25% max memory 1011.3 MB = 2.5 MB
2019-06-11 12:09:57,072 INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^18 = 262144 entries
2019-06-11 12:09:57,105 INFO metrics.TopMetrics (TopMetrics.java:logConf(76)) - NNTop conf: dfs.namenode.top.window.num.buckets = 10
2019-06-11 12:09:57,106 INFO metrics.TopMetrics (TopMetrics.java:logConf(78)) - NNTop conf: dfs.namenode.top.num.users = 10
2019-06-11 12:09:57,106 INFO metrics.TopMetrics (TopMetrics.java:logConf(80)) - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2019-06-11 12:09:57,115 INFO namenode.FSNamesystem (FSNamesystem.java:initRetryCache(983)) - Retry cache on namenode is enabled
2019-06-11 12:09:57,115 INFO namenode.FSNamesystem (FSNamesystem.java:initRetryCache(991)) - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2019-06-11 12:09:57,121 INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map NameNodeRetryCache
2019-06-11 12:09:57,122 INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2019-06-11 12:09:57,122 INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.029999999329447746% max memory 1011.3 MB = 310.7 KB
2019-06-11 12:09:57,122 INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^15 = 32768 entries
2019-06-11 12:09:57,163 INFO common.Storage (Storage.java:tryLock(905)) - Lock on /hadoop/hdfs/namenode/in_use.lock acquired by nodename 27750@node4.rh.bigdata.cluster
2019-06-11 12:09:57,233 INFO namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(396)) - Recovering unfinalized segments in /hadoop/hdfs/namenode/current
2019-06-11 12:09:57,344 INFO namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(143)) - Finalizing edits file /hadoop/hdfs/namenode/current/edits_inprogress_0000000000000000025 -> /hadoop/hdfs/namenode/current/edits_0000000000000000025-0000000000000000025
2019-06-11 12:09:57,388 INFO namenode.FSImage (FSImage.java:loadFSImageFile(782)) - Planning to load image: FSImageFile(file=/hadoop/hdfs/namenode/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
2019-06-11 12:09:57,685 INFO namenode.FSImageFormatPBINode (FSImageFormatPBINode.java:loadINodeSection(266)) - Loading 1 INodes.
2019-06-11 12:09:57,807 INFO namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:load(190)) - Loaded FSImage in 0 seconds.
2019-06-11 12:09:57,808 INFO namenode.FSImage (FSImage.java:loadFSImage(951)) - Loaded image for txid 0 from /hadoop/hdfs/namenode/current/fsimage_0000000000000000000
2019-06-11 12:09:57,809 INFO namenode.FSImage (FSImage.java:loadEdits(887)) - Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@20f12539 expecting start txid #1
2019-06-11 12:09:57,809 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(158)) - Start loading edits file /hadoop/hdfs/namenode/current/edits_0000000000000000001-0000000000000000008 maxTxnsToRead = 9223372036854775807
2019-06-11 12:09:57,816 INFO namenode.RedundantEditLogInputStream (RedundantEditLogInputStream.java:nextOp(177)) - Fast-forwarding stream '/hadoop/hdfs/namenode/current/edits_0000000000000000001-0000000000000000008' to transaction ID 1
2019-06-11 12:09:58,566 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(162)) - Edits file /hadoop/hdfs/namenode/current/edits_0000000000000000001-0000000000000000008 of size 1048576 edits # 8 loaded in 0 seconds
2019-06-11 12:09:58,567 INFO namenode.FSImage (FSImage.java:loadEdits(887)) - Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@75b25825 expecting start txid #9
2019-06-11 12:09:58,567 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(158)) - Start loading edits file /hadoop/hdfs/namenode/current/edits_0000000000000000009-0000000000000000010 maxTxnsToRead = 9223372036854775807
2019-06-11 12:09:58,568 INFO namenode.RedundantEditLogInputStream (RedundantEditLogInputStream.java:nextOp(177)) - Fast-forwarding stream '/hadoop/hdfs/namenode/current/edits_0000000000000000009-0000000000000000010' to transaction ID 1
2019-06-11 12:09:58,569 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(162)) - Edits file /hadoop/hdfs/namenode/current/edits_0000000000000000009-0000000000000000010 of size 42 edits # 2 loaded in 0 seconds
2019-06-11 12:09:58,569 INFO namenode.FSImage (FSImage.java:loadEdits(887)) - Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@18025ced expecting start txid #11
2019-06-11 12:09:58,570 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(158)) - Start loading edits file /hadoop/hdfs/namenode/current/edits_0000000000000000011-0000000000000000012 maxTxnsToRead = 9223372036854775807
2019-06-11 12:09:58,571 INFO namenode.RedundantEditLogInputStream (RedundantEditLogInputStream.java:nextOp(177)) - Fast-forwarding stream '/hadoop/hdfs/namenode/current/edits_0000000000000000011-0000000000000000012' to transaction ID 1
2019-06-11 12:09:58,574 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(162)) - Edits file /hadoop/hdfs/namenode/current/edits_0000000000000000011-0000000000000000012 of size 42 edits # 2 loaded in 0 seconds
2019-06-11 12:09:58,575 INFO namenode.FSImage (FSImage.java:loadEdits(887)) - Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@13cf7d52 expecting start txid #13
2019-06-11 12:09:58,575 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(158)) - Start loading edits file /hadoop/hdfs/namenode/current/edits_0000000000000000013-0000000000000000014 maxTxnsToRead = 9223372036854775807
2019-06-11 12:09:58,575 INFO namenode.RedundantEditLogInputStream (RedundantEditLogInputStream.java:nextOp(177)) - Fast-forwarding stream '/hadoop/hdfs/namenode/current/edits_0000000000000000013-0000000000000000014' to transaction ID 1
2019-06-11 12:09:58,576 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(162)) - Edits file /hadoop/hdfs/namenode/current/edits_0000000000000000013-0000000000000000014 of size 42 edits # 2 loaded in 0 seconds
2019-06-11 12:09:58,577 INFO namenode.FSImage (FSImage.java:loadEdits(887)) - Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@3a3e4aff expecting start txid #15
2019-06-11 12:09:58,579 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(158)) - Start loading edits file /hadoop/hdfs/namenode/current/edits_0000000000000000015-0000000000000000016 maxTxnsToRead = 9223372036854775807
2019-06-11 12:09:58,581 INFO namenode.RedundantEditLogInputStream (RedundantEditLogInputStream.java:nextOp(177)) - Fast-forwarding stream '/hadoop/hdfs/namenode/current/edits_0000000000000000015-0000000000000000016' to transaction ID 1
2019-06-11 12:09:58,582 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(162)) - Edits file /hadoop/hdfs/namenode/current/edits_0000000000000000015-0000000000000000016 of size 42 edits # 2 loaded in 0 seconds
2019-06-11 12:09:58,583 INFO namenode.FSImage (FSImage.java:loadEdits(887)) - Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@5d2a4eed expecting start txid #17
2019-06-11 12:09:58,583 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(158)) - Start loading edits file /hadoop/hdfs/namenode/current/edits_0000000000000000017-0000000000000000018 maxTxnsToRead = 9223372036854775807
2019-06-11 12:09:58,583 INFO namenode.RedundantEditLogInputStream (RedundantEditLogInputStream.java:nextOp(177)) - Fast-forwarding stream '/hadoop/hdfs/namenode/current/edits_0000000000000000017-0000000000000000018' to transaction ID 1
2019-06-11 12:09:58,585 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(162)) - Edits file /hadoop/hdfs/namenode/current/edits_0000000000000000017-0000000000000000018 of size 42 edits # 2 loaded in 0 seconds
2019-06-11 12:09:58,585 INFO namenode.FSImage (FSImage.java:loadEdits(887)) - Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@57459491 expecting start txid #19
2019-06-11 12:09:58,585 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(158)) - Start loading edits file /hadoop/hdfs/namenode/current/edits_0000000000000000019-0000000000000000020 maxTxnsToRead = 9223372036854775807
2019-06-11 12:09:58,586 INFO namenode.RedundantEditLogInputStream (RedundantEditLogInputStream.java:nextOp(177)) - Fast-forwarding stream '/hadoop/hdfs/namenode/current/edits_0000000000000000019-0000000000000000020' to transaction ID 1
2019-06-11 12:09:58,587 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(162)) - Edits file /hadoop/hdfs/namenode/current/edits_0000000000000000019-0000000000000000020 of size 42 edits # 2 loaded in 0 seconds
2019-06-11 12:09:58,587 INFO namenode.FSImage (FSImage.java:loadEdits(887)) - Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@3f0846c6 expecting start txid #21
2019-06-11 12:09:58,588 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(158)) - Start loading edits file /hadoop/hdfs/namenode/current/edits_0000000000000000021-0000000000000000022 maxTxnsToRead = 9223372036854775807
2019-06-11 12:09:58,588 INFO namenode.RedundantEditLogInputStream (RedundantEditLogInputStream.java:nextOp(177)) - Fast-forwarding stream '/hadoop/hdfs/namenode/current/edits_0000000000000000021-0000000000000000022' to transaction ID 1
2019-06-11 12:09:58,589 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(162)) - Edits file /hadoop/hdfs/namenode/current/edits_0000000000000000021-0000000000000000022 of size 42 edits # 2 loaded in 0 seconds
2019-06-11 12:09:58,589 INFO namenode.FSImage (FSImage.java:loadEdits(887)) - Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@77a98a6a expecting start txid #23
2019-06-11 12:09:58,589 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(158)) - Start loading edits file /hadoop/hdfs/namenode/current/edits_0000000000000000023-0000000000000000024 maxTxnsToRead = 9223372036854775807
2019-06-11 12:09:58,590 INFO namenode.RedundantEditLogInputStream (RedundantEditLogInputStream.java:nextOp(177)) - Fast-forwarding stream '/hadoop/hdfs/namenode/current/edits_0000000000000000023-0000000000000000024' to transaction ID 1
2019-06-11 12:09:58,590 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(162)) - Edits file /hadoop/hdfs/namenode/current/edits_0000000000000000023-0000000000000000024 of size 42 edits # 2 loaded in 0 seconds
2019-06-11 12:09:58,590 INFO namenode.FSImage (FSImage.java:loadEdits(887)) - Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@78fbff54 expecting start txid #25
2019-06-11 12:09:58,591 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(158)) - Start loading edits file /hadoop/hdfs/namenode/current/edits_0000000000000000025-0000000000000000025 maxTxnsToRead = 9223372036854775807
2019-06-11 12:09:58,591 INFO namenode.RedundantEditLogInputStream (RedundantEditLogInputStream.java:nextOp(177)) - Fast-forwarding stream '/hadoop/hdfs/namenode/current/edits_0000000000000000025-0000000000000000025' to transaction ID 1
2019-06-11 12:09:58,594 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(162)) - Edits file /hadoop/hdfs/namenode/current/edits_0000000000000000025-0000000000000000025 of size 1048576 edits # 1 loaded in 0 seconds
2019-06-11 12:09:58,594 INFO namenode.FSNamesystem (FSNamesystem.java:loadFSImage(1095)) - Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2019-06-11 12:09:58,596 INFO namenode.FSEditLog (FSEditLog.java:startLogSegment(1361)) - Starting log segment at 26
2019-06-11 12:09:58,837 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
2019-06-11 12:09:58,840 INFO namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(721)) - Finished loading FSImage in 1710 msecs
2019-06-11 12:09:59,471 INFO namenode.NameNode (NameNodeRpcServer.java:<init>(446)) - RPC server is binding to node4.rh.bigdata.cluster:8020
2019-06-11 12:09:59,503 INFO ipc.CallQueueManager (CallQueueManager.java:<init>(84)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 10000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2019-06-11 12:09:59,539 INFO ipc.Server (Server.java:run(1074)) - Starting Socket Reader #1 for port 8020
2019-06-11 12:10:00,285 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(5004)) - Registered FSNamesystemState, ReplicatedBlocksState and ECBlockGroupsState MBeans.
2019-06-11 12:10:00,299 WARN common.Util (Util.java:stringAsURI(99)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-06-11 12:10:00,345 INFO namenode.LeaseManager (LeaseManager.java:getNumUnderConstructionBlocks(171)) - Number of blocks under construction: 0
2019-06-11 12:10:00,372 INFO block.BlockTokenSecretManager (BlockTokenSecretManager.java:updateKeys(240)) - Updating block keys
2019-06-11 12:10:00,383 INFO blockmanagement.BlockManager (BlockManager.java:initializeReplQueues(4764)) - initializing replication queues
2019-06-11 12:10:00,384 INFO hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(396)) - STATE* Leaving safe mode after 0 secs
2019-06-11 12:10:00,385 INFO hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(402)) - STATE* Network topology has 0 racks and 0 datanodes
2019-06-11 12:10:00,385 INFO hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(404)) - STATE* UnderReplicatedBlocks has 0 blocks
2019-06-11 12:10:00,412 INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3451)) - Total number of blocks = 0
2019-06-11 12:10:00,414 INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3452)) - Number of invalid blocks = 0
2019-06-11 12:10:00,414 INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3453)) - Number of under-replicated blocks = 0
2019-06-11 12:10:00,414 INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3454)) - Number of over-replicated blocks = 0
2019-06-11 12:10:00,414 INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3456)) - Number of blocks being written = 0
2019-06-11 12:10:00,415 INFO hdfs.StateChange (BlockManager.java:processMisReplicatesAsync(3459)) - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 30 msec
2019-06-11 12:10:00,543 INFO ipc.Server (Server.java:run(1314)) - IPC Server Responder: starting
2019-06-11 12:10:00,552 INFO ipc.Server (Server.java:run(1153)) - IPC Server listener on 8020: starting
2019-06-11 12:10:00,911 INFO namenode.NameNode (NameNode.java:startCommonServices(812)) - NameNode RPC up at: node4.rh.bigdata.cluster/172.16.138.113:8020
2019-06-11 12:10:00,941 INFO namenode.FSNamesystem (FSNamesystem.java:startActiveServices(1207)) - Starting services required for active state
2019-06-11 12:10:00,942 INFO namenode.FSDirectory (FSDirectory.java:updateCountForQuota(774)) - Initializing quota with 4 thread(s)
2019-06-11 12:10:00,975 INFO namenode.FSDirectory (FSDirectory.java:updateCountForQuota(783)) - Quota initialization completed in 32 milliseconds
name space=4
storage space=0
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
2019-06-11 12:10:00,995 INFO blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor.java:run(160)) - Starting CacheReplicationMonitor with interval 30000 milliseconds
2019-06-11 12:10:02,731 INFO fs.TrashPolicyDefault (TrashPolicyDefault.java:<init>(228)) - The configured checkpoint interval is 0 minutes. Using an interval of 360 minutes that is used for deletion instead
2019-06-11 12:10:02,732 INFO fs.TrashPolicyDefault (TrashPolicyDefault.java:<init>(235)) - Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 0 minutes.
2019-06-11 12:10:09,660 INFO namenode.FSNamesystem (FSNamesystem.java:rollEditLog(4663)) - Roll Edit Log from 172.16.138.146
2019-06-11 12:10:09,661 INFO namenode.FSEditLog (FSEditLog.java:rollEditLog(1314)) - Rolling edit logs
2019-06-11 12:10:09,662 INFO namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1406)) - Ending log segment 26, 26
2019-06-11 12:10:09,665 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(774)) - Number of transactions: 2 Total time for transactions(ms): 4 Number of transactions batched in Syncs: 25 Number of syncs: 3 SyncTimes(ms): 13
2019-06-11 12:10:09,669 INFO namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(143)) - Finalizing edits file /hadoop/hdfs/namenode/current/edits_inprogress_0000000000000000026 -> /hadoop/hdfs/namenode/current/edits_0000000000000000026-0000000000000000027
2019-06-11 12:10:09,670 INFO namenode.FSEditLog (FSEditLog.java:startLogSegment(1361)) - Starting log segment at 28
root@node4:~#

On my Ambari UI I see "Errno 111" (failed connection) alerts. Thanks for your help.
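Since the log above shows the RPC server coming up on node4.rh.bigdata.cluster:8020, a minimal diagnostic sketch (assuming nc is available on the hosts; errno 111 is the Linux code for "connection refused") is to confirm the port is both bound on the NameNode host and reachable from the other nodes:

nc -vz node4.rh.bigdata.cluster 8020    # from the Ambari host or a worker: should report the port as open
netstat -tnlpa | grep 8020              # on node4 itself: the NameNode java process should be listening

If nc reports "connection refused" while netstat shows a listener, a firewall rule or a bind to the wrong interface is the usual suspect.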
06-11-2019
09:47 AM
Hi @Jay Kumar SenSharma. Please, how can I change the owner of those directories from root to hdfs? (See the chown sketch after the directory listings below.) rhbigdata@node4:~$ ls -lart /hadoop/hdfs/namenode/current/VERSION
-rw-r--r-- 1 root hadoop 219 mai 28 16:17 /hadoop/hdfs/namenode/current/VERSION
rhbigdata@node4:~$ ls -lart /hadoop/hdfs/namenode/current/
total 24
-rw-r--r-- 1 root hadoop 219 mai 28 16:17 VERSION
-rw-r--r-- 1 root hadoop 2 mai 28 16:17 seen_txid
-rw-r--r-- 1 root hadoop 383 mai 28 16:17 fsimage_0000000000000000000
-rw-r--r-- 1 root hadoop 62 mai 28 16:17 fsimage_0000000000000000000.md5
drwxr-xr-x 2 root hadoop 4096 mai 28 16:17 .
drwxr-xr-x 4 hdfs hadoop 4096 juin 7 10:29 ..
rhbigdata@node4:~$ ls -lart /hadoop/hdfs/namenode/
total 16
drwxr-xr-x 5 root hadoop 4096 mai 28 14:03 ..
drwxr-xr-x 2 root hadoop 4096 mai 28 14:11 namenode-formatted
drwxr-xr-x 2 root hadoop 4096 mai 28 16:17 current
drwxr-xr-x 4 hdfs hadoop 4096 juin 7 10:29 .
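A minimal sketch of the fix, assuming the NameNode should run as the hdfs user with group hadoop (as the parent directory above already shows) and that HDFS is stopped from Ambari before changing anything:

chown -R hdfs:hadoop /hadoop/hdfs/namenode    # hand the metadata directory back to the hdfs user
ls -lart /hadoop/hdfs/namenode/current        # verify ownership before restarting the NameNode

After that, restart HDFS from Ambari so the NameNode re-opens /hadoop/hdfs/namenode/current with the correct owner.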
06-11-2019
08:52 AM
@Jay Kumar SenSharma The output for the first point is: root@node4:~# netstat -tnlpa | grep 8020
root@node4:~# hostname -f
node4.rh.bigdata.cluster
root@node4:~# cat /etc/hosts
127.0.0.1 localhost
172.16.138.113 node4.rh.bigdata.cluster RHBigData4
172.16.138.156 master.rh.bigdata.cluster RHBigData1
172.16.138.145 node2.rh.bigdata.cluster RHBigData2
172.16.138.146 node3.rh.bigdata.cluster RHBigData3
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
root@node4:~# ifconfig
ens3 Link encap:Ethernet HWaddr 52:54:00:35:2f:cf
inet addr:172.16.138.113 Bcast:172.16.138.255 Mask:255.255.255.0
inet6 addr: fe80::5054:ff:fe35:2fcf/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:11104634 errors:0 dropped:11 overruns:0 frame:0
TX packets:5804287 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:9620875073 (9.6 GB) TX bytes:686215476 (686.2 MB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:13030620 errors:0 dropped:0 overruns:0 frame:0
TX packets:13030620 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:2175003977 (2.1 GB) TX bytes:2175003977 (2.1 GB)
root@node4:~# /usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://node4.rh.bigdata.cluster:8020 -safemode get
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

The log file "/var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log" is attached to this answer:

2019-06-07 09:49:30,957 INFO namenode.NameNode (LogAdapter.java:info(51)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = node4.rh.bigdata.cluster/172.16.138.113
STARTUP_MSG: args = []
STARTUP_MSG: version = 3.1.1.3.1.0.0-78
STARTUP_MSG: classpath = /usr/hdp/3.1.0.0-78/hadoop/conf:/usr/hdp/3.1.0.0-78/hadoop/lib/snappy-java-1.0.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/token-provider-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-server-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-servlet-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/stax2-api-3.1.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/netty-3.10.5.Final.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/zookeeper-3.4.6.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-api-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-io-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/htrace-core4-4.1.0-incubating.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-framework-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/httpclient-4.5.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-core-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-util-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-simplekdc-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsch-0.1.54.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-net-3.6.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-server-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/accessors-smart-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jul-to-slf4j-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-client-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/xz-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-plugin-classloader-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsr311-api-1.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/httpcore-4.4.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-hdfs-plugin-shim-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/avro-1.7.7.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-beanutils-1.9.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jaxb-api-2.2.11.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-asn1-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-core-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-configuration2-2.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jcip-annotations-1.0-1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-xdr-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-annotations-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-pkix-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-databind-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jettison-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-client-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-io-2.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/gson-2.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-lang3-3.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-identity-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/woodstox-core-5.0.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-recipes-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-crypto-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-common-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/re2j-1.1.jar:/usr/hdp/3.1.0.0-78/h
adoop/lib/metrics-core-3.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-yarn-plugin-shim-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/json-smart-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/javax.servlet-api-3.1.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-http-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-codec-1.11.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/paranamer-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/asm-5.0.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-core-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-admin-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-config-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-server-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-security-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-xml-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/nimbus-jose-jwt-4.41.1.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-kms-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//azure-keyvault-core-1.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/.//azure-data-lake-store-sdk-2.2.7.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-auth-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-nfs.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-annotations-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-annotations.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-auth.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-kms.jar:/usr/hdp/3.1.0.0-78/hadoop/.//azure-storage-7.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common-tests.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure-datalake.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-nfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/./:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/snappy-java-1.0.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/token-provider-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-server-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-servlet-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/stax2-api-3.1.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/netty-3.10.5.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/zookeeper-3.4.6.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-io-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/htrace-core4-4.1.0-incubating.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/curator-framework-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/httpclient-4.5.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-core-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-util-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-simplekdc-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jsch-0.1.54.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commo
ns-net-3.6.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-server-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/accessors-smart-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/curator-client-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-compress-1.4.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/xz-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jsr311-api-1.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/httpcore-4.4.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/avro-1.7.7.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-beanutils-1.9.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jaxb-api-2.2.11.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-asn1-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/okhttp-2.7.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-core-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-configuration2-2.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jcip-annotations-1.0-1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-xdr-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-annotations-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-pkix-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-databind-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/netty-all-4.0.52.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jettison-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-client-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-io-2.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/gson-2.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-util-ajax-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-lang3-3.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/json-simple-1.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-identity-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/woodstox-core-5.0.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/curator-recipes-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-crypto-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-common-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/re2j-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/okio-1.6.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/json-smart-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/javax.servlet-api-3.1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-http-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-xc-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-codec-1.11.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/paranamer-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/asm-5.0.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-core-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-admin-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-config-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/hdp/3.1.0.0-78/h
adoop-hdfs/lib/kerb-server-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-security-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-xml-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-nfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-httpfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-httpfs.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/lib/*:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-handler-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archive-logs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-fs2img-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-codec-http-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archives.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//kafka-clients-0.8.2.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//azure-keyvault-core-1.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//aws-java-sdk-bundle-1.11.271.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-sls.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aliyun-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-distcp-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aliyun.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-resolver-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//google-extensions-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//ojalgo-43.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//azure-data-lake-store-sdk-2.2.7.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-codec-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-app-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//jdom-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//flogger-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-uploader.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-extras-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-kafka.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-uploader-3.1.1.3.1.0.0-78.j
ar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//lz4-1.2.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-openstack-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-openstack.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-common-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//wildfly-openssl-1.0.4.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//flogger-log4j-backend-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-streaming-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aws-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-fs2img.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-extras.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//azure-storage-7.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-datajoin-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//aliyun-sdk-oss-2.8.3.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//flogger-system-backend-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//gcs-connector-1.9.10.3.1.0.0-78-shaded.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aws.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-transport-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-kafka-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-gridmix-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-resourceestimator.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure-datalake.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-rumen-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-resourceestimator-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archives-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-sls-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-examples-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archive-logs.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-buffer-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-j
obclient.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/./:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/ehcache-3.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/fst-2.50.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/snakeyaml-1.16.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/HikariCP-java7-2.4.12.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jackson-module-jaxb-annotations-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/guice-servlet-4.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/guice-4.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/json-io-2.5.1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/metrics-core-3.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/objenesis-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jackson-jaxrs-base-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jackson-jaxrs-json-provider-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/swagger-annotations-1.5.4.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/javax.inject-1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jersey-guice-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/dnsjava-2.1.7.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/java-util-1.9.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-services-api.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-api-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-router-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-registry-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-tests-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-services-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-services-api-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-router.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop
-yarn-services-core.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-web-proxy-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-nodemanager-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-cache-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-protobuf-history-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-library-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-javadoc-tools-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-api-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-history-parser-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-fs-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-dag-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-common-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-2.8-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-mapreduce-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-tests-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-examples-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-job-analyzer-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-internals-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-acls-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-io-2.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-server-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections4-4.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/servlet-api-2.5.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/RoaringBitmap-0.4.9.jar:/usr/hdp/3.1.0.0-78/tez/lib/slf4j-api-1.7.10.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-aws-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-util-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/gcs-connector-1.9.10.3.1.0.0-78-shaded.jar:/usr/hdp/3.1.0.0-78/tez/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/async-http-client-1.9.40.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-codec-1.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/conf:/usr/hdp/3.1.0.0-78/tez/conf_llap:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-2.8-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib:/usr/
hdp/3.1.0.0-78/tez/man:/usr/hdp/3.1.0.0-78/tez/tez-api-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-common-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-dag-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-examples-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-history-parser-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-javadoc-tools-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-job-analyzer-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-mapreduce-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-protobuf-history-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-internals-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-library-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-tests-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-cache-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-acls-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-fs-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/ui:/usr/hdp/3.1.0.0-78/tez/lib/async-http-client-1.9.40.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-codec-1.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections4-4.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-io-2.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/gcs-connector-1.9.10.3.1.0.0-78-shaded.jar:/usr/hdp/3.1.0.0-78/tez/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-aws-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-server-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-util-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/RoaringBitmap-0.4.9.jar:/usr/hdp/3.1.0.0-78/tez/lib/servlet-api-2.5.jar:/usr/hdp/3.1.0.0-78/tez/lib/slf4j-api-1.7.10.jar:/usr/hdp/3.1.0.0-78/tez/lib/tez.tar.gz
STARTUP_MSG: build = git@github.com:hortonworks/hadoop.git -r e4f82af51faec922b4804d0232a637422ec29e64; compiled by 'jenkins' on 2018-12-06T13:34Z
STARTUP_MSG: java = 1.8.0_112
************************************************************/
2019-06-07 09:49:31,006 INFO namenode.NameNode (LogAdapter.java:info(51)) - registered UNIX signal handlers for [TERM, HUP, INT]
2019-06-07 09:49:31,443 INFO namenode.NameNode (NameNode.java:createNameNode(1583)) - createNameNode []
2019-06-07 09:49:32,022 INFO impl.MetricsConfig (MetricsConfig.java:loadFirst(118)) - Loaded properties from hadoop-metrics2.properties
2019-06-07 09:49:32,626 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(374)) - Scheduled Metric snapshot period at 10 second(s).
2019-06-07 09:49:32,627 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) - NameNode metrics system started
2019-06-07 09:49:32,851 INFO namenode.NameNodeUtils (NameNodeUtils.java:getClientNamenodeAddress(79)) - fs.defaultFS is hdfs://node4.rh.bigdata.cluster:8020
2019-06-07 09:49:32,852 INFO namenode.NameNode (NameNode.java:<init>(928)) - Clients should use node4.rh.bigdata.cluster:8020 to access this namenode/service.
2019-06-07 09:49:33,523 INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause monitor
2019-06-07 09:49:33,626 INFO hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1605)) - Starting Web-server for hdfs at: http://node4.rh.bigdata.cluster:50070
2019-06-07 09:49:33,703 INFO util.log (Log.java:initialized(192)) - Logging initialized @6651ms
2019-06-07 09:49:34,258 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-06-07 09:49:34,328 INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(81)) - Http request log for http.requests.namenode is not defined
2019-06-07 09:49:34,384 INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(968)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-06-07 09:49:34,404 INFO http.HttpServer2 (HttpServer2.java:addFilter(941)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context hdfs
2019-06-07 09:49:34,405 INFO http.HttpServer2 (HttpServer2.java:addFilter(951)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context static
2019-06-07 09:49:34,424 INFO http.HttpServer2 (HttpServer2.java:addFilter(951)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context logs
2019-06-07 09:49:34,424 INFO security.HttpCrossOriginFilterInitializer (HttpCrossOriginFilterInitializer.java:initFilter(49)) - CORS filter not enabled. Please set hadoop.http.cross-origin.enabled to 'true' to enable it
2019-06-07 09:49:34,562 INFO http.HttpServer2 (NameNodeHttpServer.java:initWebHdfs(100)) - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2019-06-07 09:49:34,563 INFO http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(787)) - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2019-06-07 09:49:34,628 INFO http.HttpServer2 (HttpServer2.java:bindListener(1185)) - Jetty bound to port 50070
2019-06-07 09:49:34,633 INFO server.Server (Server.java:doStart(351)) - jetty-9.3.24.v20180605, build timestamp: 2018-06-05T19:11:56+02:00, git hash: 84205aa28f11a4f31f2a3b86d1bba2cc8ab69827
2019-06-07 09:49:34,949 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-06-07 09:49:34,971 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@2b30a42c{/logs,file:///var/log/hadoop/hdfs/,AVAILABLE}
2019-06-07 09:49:34,975 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@359df09a{/static,file:///usr/hdp/3.1.0.0-78/hadoop-hdfs/webapps/static/,AVAILABLE}
2019-06-07 09:49:35,733 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.w.WebAppContext@1169afe1{/,file:///usr/hdp/3.1.0.0-78/hadoop-hdfs/webapps/hdfs/,AVAILABLE}{/hdfs}
2019-06-07 09:49:35,766 INFO server.AbstractConnector (AbstractConnector.java:doStart(278)) - Started ServerConnector@2d913e3f{HTTP/1.1,[http/1.1]}{node4.rh.bigdata.cluster:50070}
2019-06-07 09:49:35,767 INFO server.Server (Server.java:doStart(419)) - Started @8715ms
2019-06-07 09:49:36,864 WARN common.Util (Util.java:stringAsURI(99)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-06-07 09:49:36,866 WARN common.Util (Util.java:stringAsURI(99)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-06-07 09:49:36,866 WARN namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(680)) - Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-06-07 09:49:36,867 WARN namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(685)) - Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-06-07 09:49:36,894 WARN common.Util (Util.java:stringAsURI(99)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-06-07 09:49:36,895 WARN common.Util (Util.java:stringAsURI(99)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-06-07 09:49:36,969 WARN common.Storage (NNStorage.java:setRestoreFailedStorage(223)) - set restore failed storage to true
2019-06-07 09:49:37,072 INFO namenode.FSEditLog (FSEditLog.java:newInstance(227)) - Edit logging is async:true
2019-06-07 09:49:37,139 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(749)) - KeyProvider: null
2019-06-07 09:49:37,142 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(752)) - Enabling async auditlog
2019-06-07 09:49:37,155 INFO namenode.FSNamesystem (FSNamesystemLock.java:<init>(122)) - fsLock is fair: false
2019-06-07 09:49:37,157 INFO namenode.FSNamesystem (FSNamesystemLock.java:<init>(138)) - Detailed lock hold time metrics enabled: false
2019-06-07 09:49:37,191 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(774)) - fsOwner = hdfs (auth:SIMPLE)
2019-06-07 09:49:37,191 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(775)) - supergroup = hdfs
2019-06-07 09:49:37,191 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(776)) - isPermissionEnabled = true
2019-06-07 09:49:37,192 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(787)) - HA Enabled: false
2019-06-07 09:49:37,328 INFO blockmanagement.HeartbeatManager (HeartbeatManager.java:<init>(84)) - Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval
2019-06-07 09:49:37,334 INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-06-07 09:49:37,450 INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(301)) - dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2019-06-07 09:49:37,450 INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(309)) - dfs.namenode.datanode.registration.ip-hostname-check=true
2019-06-07 09:49:37,462 INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(79)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:01:00:00.000
2019-06-07 09:49:37,464 INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(85)) - The block deletion will start around 2019 juin 07 10:49:37
2019-06-07 09:49:37,469 INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map BlocksMap
2019-06-07 09:49:37,469 INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2019-06-07 09:49:37,476 INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1011.3 MB = 20.2 MB
2019-06-07 09:49:37,476 INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^21 = 2097152 entries
2019-06-07 09:49:37,506 INFO blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(579)) - dfs.block.access.token.enable = true
2019-06-07 09:49:37,507 INFO blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(601)) - dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null
2019-06-07 09:49:37,690 INFO blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(161)) - dfs.namenode.safemode.threshold-pct = 1.0
2019-06-07 09:49:37,690 INFO blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(162)) - dfs.namenode.safemode.min.datanodes = 0
2019-06-07 09:49:37,690 INFO blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(164)) - dfs.namenode.safemode.extension = 30000
2019-06-07 09:49:37,691 INFO blockmanagement.BlockManager (BlockManager.java:<init>(565)) - defaultReplication = 3
2019-06-07 09:49:37,691 INFO blockmanagement.BlockManager (BlockManager.java:<init>(566)) - maxReplication = 50
2019-06-07 09:49:37,691 INFO blockmanagement.BlockManager (BlockManager.java:<init>(567)) - minReplication = 1
2019-06-07 09:49:37,692 INFO blockmanagement.BlockManager (BlockManager.java:<init>(568)) - maxReplicationStreams = 2
2019-06-07 09:49:37,692 INFO blockmanagement.BlockManager (BlockManager.java:<init>(569)) - redundancyRecheckInterval = 3000ms
2019-06-07 09:49:37,742 INFO blockmanagement.BlockManager (BlockManager.java:<init>(570)) - encryptDataTransfer = false
2019-06-07 09:49:37,742 INFO blockmanagement.BlockManager (BlockManager.java:<init>(571)) - maxNumBlocksToLog = 1000
2019-06-07 09:49:37,877 INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map INodeMap
2019-06-07 09:49:37,877 INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2019-06-07 09:49:37,877 INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 1.0% max memory 1011.3 MB = 10.1 MB
2019-06-07 09:49:37,878 INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^20 = 1048576 entries
2019-06-07 09:49:37,881 INFO namenode.FSDirectory (FSDirectory.java:<init>(287)) - ACLs enabled? true
2019-06-07 09:49:37,881 INFO namenode.FSDirectory (FSDirectory.java:<init>(291)) - POSIX ACL inheritance enabled? true
2019-06-07 09:49:37,881 INFO namenode.FSDirectory (FSDirectory.java:<init>(295)) - XAttrs enabled? true
2019-06-07 09:49:37,881 INFO namenode.NameNode (FSDirectory.java:<init>(359)) - Caching file names occurring more than 10 times
2019-06-07 09:49:37,898 INFO snapshot.SnapshotManager (SnapshotManager.java:<init>(124)) - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2019-06-07 09:49:37,905 INFO snapshot.SnapshotManager (DirectoryDiffListFactory.java:init(43)) - SkipList is disabled
2019-06-07 09:49:37,920 INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map cachedBlocks
2019-06-07 09:49:37,921 INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2019-06-07 09:49:37,921 INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.25% max memory 1011.3 MB = 2.5 MB
2019-06-07 09:49:37,921 INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^18 = 262144 entries
2019-06-07 09:49:37,942 INFO metrics.TopMetrics (TopMetrics.java:logConf(76)) - NNTop conf: dfs.namenode.top.window.num.buckets = 10
2019-06-07 09:49:37,942 INFO metrics.TopMetrics (TopMetrics.java:logConf(78)) - NNTop conf: dfs.namenode.top.num.users = 10
2019-06-07 09:49:37,942 INFO metrics.TopMetrics (TopMetrics.java:logConf(80)) - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2019-06-07 09:49:37,950 INFO namenode.FSNamesystem (FSNamesystem.java:initRetryCache(983)) - Retry cache on namenode is enabled
2019-06-07 09:49:37,950 INFO namenode.FSNamesystem (FSNamesystem.java:initRetryCache(991)) - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2019-06-07 09:49:37,956 INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map NameNodeRetryCache
2019-06-07 09:49:37,957 INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2019-06-07 09:49:37,957 INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.029999999329447746% max memory 1011.3 MB = 310.7 KB
2019-06-07 09:49:37,957 INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^15 = 32768 entries
2019-06-07 09:49:37,993 INFO common.Storage (Storage.java:tryLock(905)) - Lock on /hadoop/hdfs/namenode/in_use.lock acquired by nodename 27296@node4.rh.bigdata.cluster
2019-06-07 09:49:37,999 WARN namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(716)) - Encountered exception loading fsimage
java.io.FileNotFoundException: /hadoop/hdfs/namenode/current/VERSION (Permission denied)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at org.apache.hadoop.hdfs.server.common.StorageInfo.readPropertiesFile(StorageInfo.java:250)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.readProperties(NNStorage.java:660)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:227)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1090)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
2019-06-07 09:49:38,011 INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.w.WebAppContext@1169afe1{/,null,UNAVAILABLE}{/hdfs}
2019-06-07 09:49:38,025 INFO server.AbstractConnector (AbstractConnector.java:doStop(318)) - Stopped ServerConnector@2d913e3f{HTTP/1.1,[http/1.1]}{node4.rh.bigdata.cluster:50070}
2019-06-07 09:49:38,026 INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@359df09a{/static,file:///usr/hdp/3.1.0.0-78/hadoop-hdfs/webapps/static/,UNAVAILABLE}
2019-06-07 09:49:38,026 INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@2b30a42c{/logs,file:///var/log/hadoop/hdfs/,UNAVAILABLE}
2019-06-07 09:49:38,032 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(210)) - Stopping NameNode metrics system...
2019-06-07 09:49:38,034 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(216)) - NameNode metrics system stopped.
2019-06-07 09:49:38,034 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(607)) - NameNode metrics system shutdown complete.
2019-06-07 09:49:38,034 ERROR namenode.NameNode (NameNode.java:main(1715)) - Failed to start namenode.
java.io.FileNotFoundException: /hadoop/hdfs/namenode/current/VERSION (Permission denied)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at org.apache.hadoop.hdfs.server.common.StorageInfo.readPropertiesFile(StorageInfo.java:250)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.readProperties(NNStorage.java:660)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:227)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1090)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
2019-06-07 09:49:38,038 INFO util.ExitUtil (ExitUtil.java:terminate(210)) - Exiting with status 1: java.io.FileNotFoundException: /hadoop/hdfs/namenode/current/VERSION (Permission denied)
2019-06-07 09:49:38,047 INFO namenode.NameNode (LogAdapter.java:info(51)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node4.rh.bigdata.cluster/172.16.138.113
************************************************************/
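The trace above shows the real blocker is not ZooKeeper: the NameNode cannot open /hadoop/hdfs/namenode/current/VERSION (Permission denied), so the fsimage never loads and the process exits with status 1. A quick way to check and, if needed, repair the ownership of the NameNode metadata directory could look like the sketch below (this assumes the usual HDP service user and group hdfs:hadoop, which is a guess; adjust to whatever your cluster actually uses):

root@node4:~# ls -ld /hadoop/hdfs/namenode /hadoop/hdfs/namenode/current
root@node4:~# ls -l /hadoop/hdfs/namenode/current/VERSION
# if these show root (or another user) as the owner, e.g. because the NameNode was once
# started or formatted as root, hand the directory tree back to the HDFS service user:
root@node4:~# chown -R hdfs:hadoop /hadoop/hdfs/namenode

After the ownership is corrected, restarting the NameNode from Ambari should get past this point. The same failure repeats unchanged on the next restart attempt at 10:29: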
2019-06-07 10:29:39,894 INFO namenode.NameNode (LogAdapter.java:info(51)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = node4.rh.bigdata.cluster/172.16.138.113
STARTUP_MSG: args = []
STARTUP_MSG: version = 3.1.1.3.1.0.0-78
STARTUP_MSG: classpath = /usr/hdp/3.1.0.0-78/hadoop/conf:/usr/hdp/3.1.0.0-78/hadoop/lib/snappy-java-1.0.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/token-provider-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-server-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-servlet-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/stax2-api-3.1.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/netty-3.10.5.Final.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/zookeeper-3.4.6.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-api-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-io-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/htrace-core4-4.1.0-incubating.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-framework-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/httpclient-4.5.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-core-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-util-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-simplekdc-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsch-0.1.54.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-net-3.6.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-server-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/accessors-smart-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jul-to-slf4j-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-client-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/xz-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-plugin-classloader-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsr311-api-1.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/httpcore-4.4.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-hdfs-plugin-shim-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/avro-1.7.7.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-beanutils-1.9.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jaxb-api-2.2.11.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-asn1-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-core-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-configuration2-2.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jcip-annotations-1.0-1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-xdr-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-annotations-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-pkix-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-databind-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jettison-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-client-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-io-2.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/gson-2.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-lang3-3.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-identity-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/woodstox-core-5.0.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-recipes-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-crypto-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-common-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/re2j-1.1.jar:/usr/hdp/3.1.0.0-78/h
adoop/lib/metrics-core-3.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-yarn-plugin-shim-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/json-smart-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/javax.servlet-api-3.1.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-http-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-codec-1.11.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/paranamer-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/asm-5.0.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-core-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-admin-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-config-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-server-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-security-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-xml-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/nimbus-jose-jwt-4.41.1.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-kms-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//azure-keyvault-core-1.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/.//azure-data-lake-store-sdk-2.2.7.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-auth-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-nfs.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-annotations-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-annotations.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-auth.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-kms.jar:/usr/hdp/3.1.0.0-78/hadoop/.//azure-storage-7.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common-tests.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure-datalake.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-nfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/./:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/snappy-java-1.0.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/token-provider-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-server-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-servlet-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/stax2-api-3.1.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/netty-3.10.5.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/zookeeper-3.4.6.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-io-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/htrace-core4-4.1.0-incubating.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/curator-framework-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/httpclient-4.5.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-core-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-util-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-simplekdc-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jsch-0.1.54.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commo
ns-net-3.6.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-server-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/accessors-smart-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/curator-client-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-compress-1.4.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/xz-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jsr311-api-1.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/httpcore-4.4.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/avro-1.7.7.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-beanutils-1.9.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jaxb-api-2.2.11.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-asn1-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/okhttp-2.7.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-core-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-configuration2-2.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jcip-annotations-1.0-1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-xdr-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-annotations-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-pkix-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-databind-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/netty-all-4.0.52.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jettison-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-client-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-io-2.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/gson-2.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-util-ajax-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-lang3-3.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/json-simple-1.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-identity-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/woodstox-core-5.0.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/curator-recipes-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-crypto-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-common-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/re2j-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/okio-1.6.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/json-smart-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/javax.servlet-api-3.1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-http-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-xc-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-codec-1.11.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/paranamer-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/asm-5.0.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-core-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-admin-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-config-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/hdp/3.1.0.0-78/h
adoop-hdfs/lib/kerb-server-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-security-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-xml-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-nfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-httpfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-httpfs.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/lib/*:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-handler-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archive-logs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-fs2img-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-codec-http-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archives.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//kafka-clients-0.8.2.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//azure-keyvault-core-1.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//aws-java-sdk-bundle-1.11.271.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-sls.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aliyun-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-distcp-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aliyun.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-resolver-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//google-extensions-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//ojalgo-43.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//azure-data-lake-store-sdk-2.2.7.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-codec-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-app-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//jdom-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//flogger-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-uploader.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-extras-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-kafka.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-uploader-3.1.1.3.1.0.0-78.j
ar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//lz4-1.2.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-openstack-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-openstack.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-common-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//wildfly-openssl-1.0.4.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//flogger-log4j-backend-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-streaming-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aws-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-fs2img.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-extras.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//azure-storage-7.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-datajoin-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//aliyun-sdk-oss-2.8.3.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//flogger-system-backend-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//gcs-connector-1.9.10.3.1.0.0-78-shaded.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aws.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-transport-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-kafka-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-gridmix-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-resourceestimator.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure-datalake.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-rumen-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-resourceestimator-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archives-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-sls-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-examples-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archive-logs.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-buffer-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-j
obclient.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/./:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/ehcache-3.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/fst-2.50.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/snakeyaml-1.16.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/HikariCP-java7-2.4.12.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jackson-module-jaxb-annotations-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/guice-servlet-4.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/guice-4.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/json-io-2.5.1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/metrics-core-3.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/objenesis-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jackson-jaxrs-base-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jackson-jaxrs-json-provider-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/swagger-annotations-1.5.4.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/javax.inject-1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jersey-guice-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/dnsjava-2.1.7.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/java-util-1.9.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-services-api.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-api-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-router-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-registry-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-tests-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-services-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-services-api-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-router.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop
-yarn-services-core.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-web-proxy-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-nodemanager-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-cache-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-protobuf-history-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-library-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-javadoc-tools-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-api-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-history-parser-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-fs-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-dag-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-common-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-2.8-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-mapreduce-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-tests-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-examples-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-job-analyzer-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-internals-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-acls-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-io-2.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-server-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections4-4.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/servlet-api-2.5.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/RoaringBitmap-0.4.9.jar:/usr/hdp/3.1.0.0-78/tez/lib/slf4j-api-1.7.10.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-aws-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-util-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/gcs-connector-1.9.10.3.1.0.0-78-shaded.jar:/usr/hdp/3.1.0.0-78/tez/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/async-http-client-1.9.40.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-codec-1.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/conf:/usr/hdp/3.1.0.0-78/tez/conf_llap:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-2.8-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib:/usr/
hdp/3.1.0.0-78/tez/man:/usr/hdp/3.1.0.0-78/tez/tez-api-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-common-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-dag-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-examples-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-history-parser-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-javadoc-tools-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-job-analyzer-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-mapreduce-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-protobuf-history-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-internals-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-library-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-tests-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-cache-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-acls-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-fs-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/ui:/usr/hdp/3.1.0.0-78/tez/lib/async-http-client-1.9.40.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-codec-1.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections4-4.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-io-2.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/gcs-connector-1.9.10.3.1.0.0-78-shaded.jar:/usr/hdp/3.1.0.0-78/tez/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-aws-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-server-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-util-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/RoaringBitmap-0.4.9.jar:/usr/hdp/3.1.0.0-78/tez/lib/servlet-api-2.5.jar:/usr/hdp/3.1.0.0-78/tez/lib/slf4j-api-1.7.10.jar:/usr/hdp/3.1.0.0-78/tez/lib/tez.tar.gz
STARTUP_MSG: build = git@github.com:hortonworks/hadoop.git -r e4f82af51faec922b4804d0232a637422ec29e64; compiled by 'jenkins' on 2018-12-06T13:34Z
STARTUP_MSG: java = 1.8.0_112
************************************************************/
2019-06-07 10:29:39,940 INFO namenode.NameNode (LogAdapter.java:info(51)) - registered UNIX signal handlers for [TERM, HUP, INT]
2019-06-07 10:29:40,347 INFO namenode.NameNode (NameNode.java:createNameNode(1583)) - createNameNode []
2019-06-07 10:29:40,872 INFO impl.MetricsConfig (MetricsConfig.java:loadFirst(118)) - Loaded properties from hadoop-metrics2.properties
2019-06-07 10:29:41,343 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(374)) - Scheduled Metric snapshot period at 10 second(s).
2019-06-07 10:29:41,344 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) - NameNode metrics system started
2019-06-07 10:29:41,595 INFO namenode.NameNodeUtils (NameNodeUtils.java:getClientNamenodeAddress(79)) - fs.defaultFS is hdfs://node4.rh.bigdata.cluster:8020
2019-06-07 10:29:41,595 INFO namenode.NameNode (NameNode.java:<init>(928)) - Clients should use node4.rh.bigdata.cluster:8020 to access this namenode/service.
2019-06-07 10:29:42,306 INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause monitor
2019-06-07 10:29:42,449 INFO hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1605)) - Starting Web-server for hdfs at: http://node4.rh.bigdata.cluster:50070
2019-06-07 10:29:42,559 INFO util.log (Log.java:initialized(192)) - Logging initialized @5789ms
2019-06-07 10:29:43,109 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-06-07 10:29:43,193 INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(81)) - Http request log for http.requests.namenode is not defined
2019-06-07 10:29:43,271 INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(968)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-06-07 10:29:43,297 INFO http.HttpServer2 (HttpServer2.java:addFilter(941)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context hdfs
2019-06-07 10:29:43,302 INFO http.HttpServer2 (HttpServer2.java:addFilter(951)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context static
2019-06-07 10:29:43,308 INFO http.HttpServer2 (HttpServer2.java:addFilter(951)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context logs
2019-06-07 10:29:43,309 INFO security.HttpCrossOriginFilterInitializer (HttpCrossOriginFilterInitializer.java:initFilter(49)) - CORS filter not enabled. Please set hadoop.http.cross-origin.enabled to 'true' to enable it
2019-06-07 10:29:43,478 INFO http.HttpServer2 (NameNodeHttpServer.java:initWebHdfs(100)) - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2019-06-07 10:29:43,479 INFO http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(787)) - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2019-06-07 10:29:43,566 INFO http.HttpServer2 (HttpServer2.java:bindListener(1185)) - Jetty bound to port 50070
2019-06-07 10:29:43,572 INFO server.Server (Server.java:doStart(351)) - jetty-9.3.24.v20180605, build timestamp: 2018-06-05T19:11:56+02:00, git hash: 84205aa28f11a4f31f2a3b86d1bba2cc8ab69827
2019-06-07 10:29:43,755 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-06-07 10:29:43,768 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@2b30a42c{/logs,file:///var/log/hadoop/hdfs/,AVAILABLE}
2019-06-07 10:29:43,771 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@359df09a{/static,file:///usr/hdp/3.1.0.0-78/hadoop-hdfs/webapps/static/,AVAILABLE}
2019-06-07 10:29:44,437 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.w.WebAppContext@1169afe1{/,file:///usr/hdp/3.1.0.0-78/hadoop-hdfs/webapps/hdfs/,AVAILABLE}{/hdfs}
2019-06-07 10:29:44,475 INFO server.AbstractConnector (AbstractConnector.java:doStart(278)) - Started ServerConnector@13d9cbf5{HTTP/1.1,[http/1.1]}{node4.rh.bigdata.cluster:50070}
2019-06-07 10:29:44,476 INFO server.Server (Server.java:doStart(419)) - Started @7707ms
2019-06-07 10:29:45,583 WARN common.Util (Util.java:stringAsURI(99)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-06-07 10:29:45,584 WARN common.Util (Util.java:stringAsURI(99)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-06-07 10:29:45,584 WARN namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(680)) - Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-06-07 10:29:45,585 WARN namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(685)) - Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-06-07 10:29:45,608 WARN common.Util (Util.java:stringAsURI(99)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-06-07 10:29:45,609 WARN common.Util (Util.java:stringAsURI(99)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-06-07 10:29:45,696 WARN common.Storage (NNStorage.java:setRestoreFailedStorage(223)) - set restore failed storage to true
2019-06-07 10:29:45,826 INFO namenode.FSEditLog (FSEditLog.java:newInstance(227)) - Edit logging is async:true
2019-06-07 10:29:45,899 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(749)) - KeyProvider: null
2019-06-07 10:29:45,900 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(752)) - Enabling async auditlog
2019-06-07 10:29:45,913 INFO namenode.FSNamesystem (FSNamesystemLock.java:<init>(122)) - fsLock is fair: false
2019-06-07 10:29:45,915 INFO namenode.FSNamesystem (FSNamesystemLock.java:<init>(138)) - Detailed lock hold time metrics enabled: false
2019-06-07 10:29:45,945 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(774)) - fsOwner = hdfs (auth:SIMPLE)
2019-06-07 10:29:45,945 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(775)) - supergroup = hdfs
2019-06-07 10:29:45,945 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(776)) - isPermissionEnabled = true
2019-06-07 10:29:45,946 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(787)) - HA Enabled: false
2019-06-07 10:29:46,092 INFO blockmanagement.HeartbeatManager (HeartbeatManager.java:<init>(84)) - Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval
2019-06-07 10:29:46,099 INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-06-07 10:29:46,162 INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(301)) - dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2019-06-07 10:29:46,163 INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(309)) - dfs.namenode.datanode.registration.ip-hostname-check=true
2019-06-07 10:29:46,179 INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(79)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:01:00:00.000
2019-06-07 10:29:46,180 INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(85)) - The block deletion will start around 2019 juin 07 11:29:46
2019-06-07 10:29:46,186 INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map BlocksMap
2019-06-07 10:29:46,186 INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2019-06-07 10:29:46,194 INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1011.3 MB = 20.2 MB
2019-06-07 10:29:46,194 INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^21 = 2097152 entries
2019-06-07 10:29:46,230 INFO blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(579)) - dfs.block.access.token.enable = true
2019-06-07 10:29:46,230 INFO blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(601)) - dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null
2019-06-07 10:29:46,507 INFO blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(161)) - dfs.namenode.safemode.threshold-pct = 1.0
2019-06-07 10:29:46,507 INFO blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(162)) - dfs.namenode.safemode.min.datanodes = 0
2019-06-07 10:29:46,507 INFO blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(164)) - dfs.namenode.safemode.extension = 30000
2019-06-07 10:29:46,508 INFO blockmanagement.BlockManager (BlockManager.java:<init>(565)) - defaultReplication = 3
2019-06-07 10:29:46,508 INFO blockmanagement.BlockManager (BlockManager.java:<init>(566)) - maxReplication = 50
2019-06-07 10:29:46,508 INFO blockmanagement.BlockManager (BlockManager.java:<init>(567)) - minReplication = 1
2019-06-07 10:29:46,508 INFO blockmanagement.BlockManager (BlockManager.java:<init>(568)) - maxReplicationStreams = 2
2019-06-07 10:29:46,509 INFO blockmanagement.BlockManager (BlockManager.java:<init>(569)) - redundancyRecheckInterval = 3000ms
2019-06-07 10:29:46,509 INFO blockmanagement.BlockManager (BlockManager.java:<init>(570)) - encryptDataTransfer = false
2019-06-07 10:29:46,512 INFO blockmanagement.BlockManager (BlockManager.java:<init>(571)) - maxNumBlocksToLog = 1000
2019-06-07 10:29:46,631 INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map INodeMap
2019-06-07 10:29:46,632 INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2019-06-07 10:29:46,632 INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 1.0% max memory 1011.3 MB = 10.1 MB
2019-06-07 10:29:46,632 INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^20 = 1048576 entries
2019-06-07 10:29:46,635 INFO namenode.FSDirectory (FSDirectory.java:<init>(287)) - ACLs enabled? true
2019-06-07 10:29:46,635 INFO namenode.FSDirectory (FSDirectory.java:<init>(291)) - POSIX ACL inheritance enabled? true
2019-06-07 10:29:46,636 INFO namenode.FSDirectory (FSDirectory.java:<init>(295)) - XAttrs enabled? true
2019-06-07 10:29:46,636 INFO namenode.NameNode (FSDirectory.java:<init>(359)) - Caching file names occurring more than 10 times
2019-06-07 10:29:46,658 INFO snapshot.SnapshotManager (SnapshotManager.java:<init>(124)) - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2019-06-07 10:29:46,665 INFO snapshot.SnapshotManager (DirectoryDiffListFactory.java:init(43)) - SkipList is disabled
2019-06-07 10:29:46,677 INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map cachedBlocks
2019-06-07 10:29:46,678 INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2019-06-07 10:29:46,678 INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.25% max memory 1011.3 MB = 2.5 MB
2019-06-07 10:29:46,678 INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^18 = 262144 entries
2019-06-07 10:29:46,702 INFO metrics.TopMetrics (TopMetrics.java:logConf(76)) - NNTop conf: dfs.namenode.top.window.num.buckets = 10
2019-06-07 10:29:46,702 INFO metrics.TopMetrics (TopMetrics.java:logConf(78)) - NNTop conf: dfs.namenode.top.num.users = 10
2019-06-07 10:29:46,702 INFO metrics.TopMetrics (TopMetrics.java:logConf(80)) - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2019-06-07 10:29:46,711 INFO namenode.FSNamesystem (FSNamesystem.java:initRetryCache(983)) - Retry cache on namenode is enabled
2019-06-07 10:29:46,711 INFO namenode.FSNamesystem (FSNamesystem.java:initRetryCache(991)) - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2019-06-07 10:29:46,717 INFO util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map NameNodeRetryCache
2019-06-07 10:29:46,717 INFO util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type = 64-bit
2019-06-07 10:29:46,718 INFO util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.029999999329447746% max memory 1011.3 MB = 310.7 KB
2019-06-07 10:29:46,718 INFO util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity = 2^15 = 32768 entries
2019-06-07 10:29:46,753 INFO common.Storage (Storage.java:tryLock(905)) - Lock on /hadoop/hdfs/namenode/in_use.lock acquired by nodename 12904@node4.rh.bigdata.cluster
2019-06-07 10:29:46,758 WARN namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(716)) - Encountered exception loading fsimage
java.io.FileNotFoundException: /hadoop/hdfs/namenode/current/VERSION (Permission denied)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at org.apache.hadoop.hdfs.server.common.StorageInfo.readPropertiesFile(StorageInfo.java:250)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.readProperties(NNStorage.java:660)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:227)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1090)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
2019-06-07 10:29:46,772 INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.w.WebAppContext@1169afe1{/,null,UNAVAILABLE}{/hdfs}
2019-06-07 10:29:46,792 INFO server.AbstractConnector (AbstractConnector.java:doStop(318)) - Stopped ServerConnector@13d9cbf5{HTTP/1.1,[http/1.1]}{node4.rh.bigdata.cluster:50070}
2019-06-07 10:29:46,793 INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@359df09a{/static,file:///usr/hdp/3.1.0.0-78/hadoop-hdfs/webapps/static/,UNAVAILABLE}
2019-06-07 10:29:46,794 INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@2b30a42c{/logs,file:///var/log/hadoop/hdfs/,UNAVAILABLE}
2019-06-07 10:29:46,806 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(210)) - Stopping NameNode metrics system...
2019-06-07 10:29:46,810 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(216)) - NameNode metrics system stopped.
2019-06-07 10:29:46,810 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(607)) - NameNode metrics system shutdown complete.
2019-06-07 10:29:46,811 ERROR namenode.NameNode (NameNode.java:main(1715)) - Failed to start namenode.
java.io.FileNotFoundException: /hadoop/hdfs/namenode/current/VERSION (Permission denied)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at org.apache.hadoop.hdfs.server.common.StorageInfo.readPropertiesFile(StorageInfo.java:250)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.readProperties(NNStorage.java:660)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:227)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1090)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
2019-06-07 10:29:46,815 INFO util.ExitUtil (ExitUtil.java:terminate(210)) - Exiting with status 1: java.io.FileNotFoundException: /hadoop/hdfs/namenode/current/VERSION (Permission denied)
2019-06-07 10:29:46,823 INFO namenode.NameNode (LogAdapter.java:info(51)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node4.rh.bigdata.cluster/172.16.138.113
************************************************************/
I can't understand the warning in the log file. The output of the safemode get command is this error:
root@node4:~# /usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://node4.rh.bigdata.cluster:8020 -safemode get
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
I still need your help please 🙂
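(For reference, a minimal check against the "Permission denied" on /hadoop/hdfs/namenode/current/VERSION shown in the log above; it assumes the NameNode runs as the hdfs user with group hadoop and that /hadoop/hdfs/namenode is the metadata directory, as the log suggests. This is a sketch, not a confirmed fix for this cluster.)
# See who currently owns the NameNode metadata files
ls -ld /hadoop/hdfs/namenode /hadoop/hdfs/namenode/current
ls -l /hadoop/hdfs/namenode/current/VERSION
# If they ended up owned by root (e.g. after formatting or starting the NameNode as root), hand them back to the hdfs service user
chown -R hdfs:hadoop /hadoop/hdfs/namenode
# Then restart the NameNode from Ambari; once it stays up, port 8020 will accept connections and the safemode check should pass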
... View more
06-07-2019
10:14 PM
@Geoffrey Shelton Okot Hello Community, I can't solve the problem of connections between hosts caused by Errno 111!! My hostname is correct, iptables is stopped, and SELinux is disabled. Here is the log from starting the NameNode of my cluster:
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2019-06-07 10:07:14,531 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://node4.rh.bigdata.cluster:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2019-06-07 10:07:32,086 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://node4.rh.bigdata.cluster:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2019-06-07 10:07:48,653 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://node4.rh.bigdata.cluster:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2019-06-07 10:08:06,283 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://node4.rh.bigdata.cluster:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2019-06-07 10:08:22,948 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://node4.rh.bigdata.cluster:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
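(A quick diagnostic, not a guaranteed fix: "Connection refused" here usually just means nothing is listening on port 8020, i.e. the NameNode process died right after starting. The port and host below are taken from the log above; the NameNode log file name follows the usual HDP default and may differ slightly on your host.)
# Is anything listening on the NameNode RPC port?
netstat -tnlpa | grep 8020
# If not, look at the end of the latest NameNode log for the real startup error
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-node4.rh.bigdata.cluster.log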
... View more
Labels:
06-07-2019
06:14 PM
Hello Community, I can't solve the problem of connections between hosts caused by Errno 111!! My hostname is correct, iptables is stopped, and SELinux is disabled. Here is the log from starting the NameNode of my cluster:
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2019-06-07 10:07:14,531 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://node4.rh.bigdata.cluster:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2019-06-07 10:07:32,086 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://node4.rh.bigdata.cluster:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2019-06-07 10:07:48,653 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://node4.rh.bigdata.cluster:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2019-06-07 10:08:06,283 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://node4.rh.bigdata.cluster:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2019-06-07 10:08:22,948 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://node4.rh.bigdata.cluster:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
safemode: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
... View more
Labels:
06-06-2019
11:06 AM
Hi @Vinay, @Geoffrey Shelton Okot, any updates or solutions for this problem? Thank you
... View more
05-29-2019
01:48 PM
Thank You @Bill Brooks
... View more
05-29-2019
01:20 PM
hi @Geoffrey Shelton Okot, that's the log when i try to start Timeline Service V2.0 Reader . stderr: /var/lib/ambari-agent/data/errors-358.txt Traceback (most recent call last): File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/timelinereader.py", line 119, in <module> ApplicationTimelineReader().execute() File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute method(env) File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/timelinereader.py", line 45, in start self.configure(env) # FOR SECURITY File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/timelinereader.py", line 73, in configure configure_hbase(env) File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/hbase_service.py", line 89, in configure_hbase owner=params.yarn_hbase_user File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__ self.env.run() File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run self.run_action(resource, action) File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action provider_action() File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 677, in action_create_on_execute self.action_delayed("create") File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 674, in action_delayed self.get_hdfs_resource_executor().action_delayed(action_name, self) File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 373, in action_delayed self.action_delayed_for_nameservice(None, action_name, main_resource) File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 395, in action_delayed_for_nameservice self._assert_valid() File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 334, in _assert_valid self.target_status = self._get_file_status(target) File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 497, in _get_file_status list_status = self.util.run_command(target, 'GETFILESTATUS', method='GET', ignore_status_codes=['404'], assertable_result=False) File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 214, in run_command return self._run_command(*args, **kwargs) File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 282, in _run_command _, out, err = get_user_call_output(cmd, user=self.run_user, logoutput=self.logoutput, quiet=False) File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/get_user_call_output.py", line 62, in get_user_call_output raise ExecutionFailed(err_msg, code, files_output[0], files_output[1]) resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -sS -L -w '%{http_code}' -X GET -d '' -H 'Content-Length: 0' 'http://master.rh.bigdata.cluster:50070/webhdfs/v1/atsv2/hbase/data?op=GETFILESTATUS&user.name=hdfs' 1>/tmp/tmpFiFkPa 2>/tmp/tmp9jUaKL' returned 7. 
curl: (7) Failed to connect to master.rh.bigdata.cluster port 50070: Connection refused 000 stdout: /var/lib/ambari-agent/data/output-358.txt 2019-05-29 11:50:12,000 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78 2019-05-29 11:50:12,040 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf 2019-05-29 11:50:12,646 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78 2019-05-29 11:50:12,661 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf 2019-05-29 11:50:12,665 - Group['hdfs'] {} 2019-05-29 11:50:12,670 - Group['hadoop'] {} 2019-05-29 11:50:12,670 - Group['users'] {} 2019-05-29 11:50:12,671 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-05-29 11:50:12,674 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-05-29 11:50:12,677 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-05-29 11:50:12,679 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-05-29 11:50:12,681 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2019-05-29 11:50:12,682 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None} 2019-05-29 11:50:12,684 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None} 2019-05-29 11:50:12,686 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-05-29 11:50:12,688 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-05-29 11:50:12,690 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None} 2019-05-29 11:50:12,691 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2019-05-29 11:50:12,694 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'} 2019-05-29 11:50:12,752 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if 2019-05-29 11:50:12,760 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'} 2019-05-29 11:50:12,771 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2019-05-29 11:50:12,777 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2019-05-29 11:50:12,778 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {} 2019-05-29 11:50:12,875 - call returned (0, '1031') 2019-05-29 11:50:12,888 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1031'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'} 2019-05-29 11:50:12,953 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1031'] due to not_if 2019-05-29 11:50:12,967 - Group['hdfs'] {} 2019-05-29 
11:50:12,975 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']} 2019-05-29 11:50:12,981 - FS Type: HDFS 2019-05-29 11:50:12,982 - Directory['/etc/hadoop'] {'mode': 0755} 2019-05-29 11:50:13,047 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2019-05-29 11:50:13,050 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777} 2019-05-29 11:50:13,206 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'} 2019-05-29 11:50:13,310 - Skipping Execute[('setenforce', '0')] due to not_if 2019-05-29 11:50:13,318 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'} 2019-05-29 11:50:13,345 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'} 2019-05-29 11:50:13,350 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'} 2019-05-29 11:50:13,351 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'} 2019-05-29 11:50:13,375 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'} 2019-05-29 11:50:13,380 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'} 2019-05-29 11:50:13,397 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644} 2019-05-29 11:50:13,424 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2019-05-29 11:50:13,426 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755} 2019-05-29 11:50:13,427 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'} 2019-05-29 11:50:13,437 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644} 2019-05-29 11:50:13,509 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755} 2019-05-29 11:50:13,559 - Skipping unlimited key JCE policy check and setup since it is not required 2019-05-29 11:50:14,790 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf 2019-05-29 11:50:14,792 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78 2019-05-29 11:50:14,836 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf 2019-05-29 11:50:14,857 - Directory['/var/log/hadoop-yarn'] {'group': 'hadoop', 'cd_access': 'a', 'create_parents': True, 'ignore_failures': True, 'mode': 0775, 'owner': 'yarn'} 2019-05-29 11:50:14,862 - Directory['/var/run/hadoop-yarn'] {'owner': 'yarn', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2019-05-29 11:50:14,863 - Directory['/var/run/hadoop-yarn/yarn'] {'owner': 'yarn', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2019-05-29 11:50:14,864 - Directory['/var/log/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'} 
2019-05-29 11:50:14,865 - Directory['/var/run/hadoop-mapreduce'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2019-05-29 11:50:14,866 - Directory['/var/run/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2019-05-29 11:50:14,868 - Directory['/var/log/hadoop-mapreduce'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2019-05-29 11:50:14,870 - Directory['/var/log/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'} 2019-05-29 11:50:14,871 - Directory['/usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase'] {'owner': 'yarn-ats', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'} 2019-05-29 11:50:14,873 - Directory['/var/run/hadoop-yarn-hbase'] {'owner': 'yarn-ats', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2019-05-29 11:50:14,873 - Creating directory Directory['/var/run/hadoop-yarn-hbase'] since it doesn't exist. 2019-05-29 11:50:14,874 - Changing owner for /var/run/hadoop-yarn-hbase from 0 to yarn-ats 2019-05-29 11:50:14,875 - Directory['/var/run/hadoop-yarn-hbase/yarn-ats'] {'owner': 'yarn-ats', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2019-05-29 11:50:14,875 - Creating directory Directory['/var/run/hadoop-yarn-hbase/yarn-ats'] since it doesn't exist. 2019-05-29 11:50:14,876 - Changing owner for /var/run/hadoop-yarn-hbase/yarn-ats from 0 to yarn-ats 2019-05-29 11:50:14,877 - Directory['/var/log/hadoop-yarn/embedded-yarn-ats-hbase'] {'owner': 'yarn-ats', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'} 2019-05-29 11:50:14,877 - Creating directory Directory['/var/log/hadoop-yarn/embedded-yarn-ats-hbase'] since it doesn't exist. 
2019-05-29 11:50:14,878 - Changing owner for /var/log/hadoop-yarn/embedded-yarn-ats-hbase from 0 to yarn-ats 2019-05-29 11:50:14,879 - Directory['/tmp'] {'create_parents': True, 'cd_access': 'a'} 2019-05-29 11:50:14,879 - Execute[('chmod', '1777', u'/tmp')] {'sudo': True} 2019-05-29 11:50:14,948 - XmlConfig['hbase-policy.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn-ats', 'configurations': {u'security.admin.protocol.acl': u'*', u'security.masterregion.protocol.acl': u'*', u'security.client.protocol.acl': u'*'}} 2019-05-29 11:50:15,008 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase/hbase-policy.xml 2019-05-29 11:50:15,009 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase/hbase-policy.xml'] {'owner': 'yarn-ats', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-05-29 11:50:15,017 - Writing File['/usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase/hbase-policy.xml'] because it doesn't exist 2019-05-29 11:50:15,018 - Changing owner for /usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase/hbase-policy.xml from 0 to yarn-ats 2019-05-29 11:50:15,030 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase/hbase-env.sh'] {'content': InlineTemplate(...), 'owner': 'yarn-ats', 'group': 'hadoop', 'mode': 0644} 2019-05-29 11:50:15,032 - Writing File['/usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase/hbase-env.sh'] because it doesn't exist 2019-05-29 11:50:15,032 - Changing owner for /usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase/hbase-env.sh from 0 to yarn-ats 2019-05-29 11:50:15,040 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase/hbase_grant_permissions.sh'] {'content': Template('yarn_hbase_grant_permissions.j2'), 'owner': 'yarn-ats', 'group': 'hadoop', 'mode': 0644} 2019-05-29 11:50:15,041 - Writing File['/usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase/hbase_grant_permissions.sh'] because it doesn't exist 2019-05-29 11:50:15,042 - Changing owner for /usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase/hbase_grant_permissions.sh from 0 to yarn-ats 2019-05-29 11:50:15,057 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'yarn-ats', 'group': 'hadoop', 'mode': 0644} 2019-05-29 11:50:15,058 - Writing File['/usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase/log4j.properties'] because it doesn't exist 2019-05-29 11:50:15,058 - Changing owner for /usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase/log4j.properties from 0 to yarn-ats 2019-05-29 11:50:15,069 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase/hadoop-metrics2-hbase.properties'] {'content': Template('hadoop-metrics2-hbase.properties.j2'), 'owner': 'yarn-ats', 'group': 'hadoop'} 2019-05-29 11:50:15,070 - Writing File['/usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase/hadoop-metrics2-hbase.properties'] because it doesn't exist 2019-05-29 11:50:15,070 - Changing owner for /usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase/hadoop-metrics2-hbase.properties from 0 to yarn-ats 2019-05-29 11:50:15,084 - Directory['/usr/lib/ambari-logsearch-logfeeder/conf'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'} 2019-05-29 11:50:15,085 - Generate Log Feeder config file: /usr/lib/ambari-logsearch-logfeeder/conf/input.config-yarn.json 2019-05-29 11:50:15,086 - 
File['/usr/lib/ambari-logsearch-logfeeder/conf/input.config-yarn.json'] {'content': Template('input.config-yarn.json.j2'), 'mode': 0644} 2019-05-29 11:50:15,088 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...} 2019-05-29 11:50:15,113 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/core-site.xml 2019-05-29 11:50:15,114 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-05-29 11:50:15,179 - XmlConfig['hdfs-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'dfs.datanode.failed.volumes.tolerated': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true'}}, 'owner': 'hdfs', 'configurations': ...} 2019-05-29 11:50:15,198 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/hdfs-site.xml 2019-05-29 11:50:15,198 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-05-29 11:50:15,273 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...} 2019-05-29 11:50:15,287 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/mapred-site.xml 2019-05-29 11:50:15,288 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/mapred-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-05-29 11:50:15,365 - Changing owner for /usr/hdp/3.1.0.0-78/hadoop/conf/mapred-site.xml from 1030 to yarn 2019-05-29 11:50:15,366 - XmlConfig['yarn-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'hidden': {u'hadoop.registry.dns.bind-port': u'true'}}, 'owner': 'yarn', 'configurations': ...} 2019-05-29 11:50:15,382 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/yarn-site.xml 2019-05-29 11:50:15,383 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/yarn-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-05-29 11:50:15,655 - XmlConfig['capacity-scheduler.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...} 2019-05-29 11:50:15,673 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/capacity-scheduler.xml 2019-05-29 11:50:15,674 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/capacity-scheduler.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-05-29 11:50:15,703 - Changing owner for /usr/hdp/3.1.0.0-78/hadoop/conf/capacity-scheduler.xml from 1028 to yarn 2019-05-29 11:50:15,704 - XmlConfig['hbase-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn-ats', 'configurations': ...} 2019-05-29 11:50:15,724 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase/hbase-site.xml 2019-05-29 11:50:15,725 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase/hbase-site.xml'] 
{'owner': 'yarn-ats', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-05-29 11:50:15,794 - XmlConfig['resource-types.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': {u'yarn.resource-types.yarn.io_gpu.maximum-allocation': u'8', u'yarn.resource-types': u''}} 2019-05-29 11:50:15,810 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/resource-types.xml 2019-05-29 11:50:15,811 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/resource-types.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-05-29 11:50:15,818 - File['/etc/security/limits.d/yarn.conf'] {'content': Template('yarn.conf.j2'), 'mode': 0644} 2019-05-29 11:50:15,823 - File['/etc/security/limits.d/mapreduce.conf'] {'content': Template('mapreduce.conf.j2'), 'mode': 0644} 2019-05-29 11:50:15,848 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/yarn-env.sh'] {'content': InlineTemplate(...), 'owner': 'yarn', 'group': 'hadoop', 'mode': 0755} 2019-05-29 11:50:15,854 - File['/usr/hdp/3.1.0.0-78/hadoop-yarn/bin/container-executor'] {'group': 'hadoop', 'mode': 02050} 2019-05-29 11:50:15,871 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/container-executor.cfg'] {'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644} 2019-05-29 11:50:15,872 - Directory['/cgroups_test/cpu'] {'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'} 2019-05-29 11:50:15,880 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/mapred-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'mode': 0755} 2019-05-29 11:50:15,883 - Directory['/var/log/hadoop-yarn/nodemanager/recovery-state'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'} 2019-05-29 11:50:15,888 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/taskcontroller.cfg'] {'content': Template('taskcontroller.cfg.j2'), 'owner': 'hdfs'} 2019-05-29 11:50:15,890 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'mapred', 'configurations': ...} 2019-05-29 11:50:15,914 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/mapred-site.xml 2019-05-29 11:50:15,915 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/mapred-site.xml'] {'owner': 'mapred', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-05-29 11:50:16,043 - Changing owner for /usr/hdp/3.1.0.0-78/hadoop/conf/mapred-site.xml from 1029 to mapred 2019-05-29 11:50:16,044 - XmlConfig['capacity-scheduler.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...} 2019-05-29 11:50:16,068 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/capacity-scheduler.xml 2019-05-29 11:50:16,069 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/capacity-scheduler.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-05-29 11:50:16,092 - Changing owner for /usr/hdp/3.1.0.0-78/hadoop/conf/capacity-scheduler.xml from 1029 to hdfs 2019-05-29 11:50:16,093 - XmlConfig['ssl-client.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...} 2019-05-29 11:50:16,109 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/ssl-client.xml 2019-05-29 11:50:16,109 - 
File['/usr/hdp/3.1.0.0-78/hadoop/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-05-29 11:50:16,119 - Directory['/usr/hdp/3.1.0.0-78/hadoop/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2019-05-29 11:50:16,121 - XmlConfig['ssl-client.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf/secure', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...} 2019-05-29 11:50:16,137 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/secure/ssl-client.xml 2019-05-29 11:50:16,138 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-05-29 11:50:16,147 - XmlConfig['ssl-server.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...} 2019-05-29 11:50:16,162 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/ssl-server.xml 2019-05-29 11:50:16,163 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2019-05-29 11:50:16,175 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/ssl-client.xml.example'] {'owner': 'mapred', 'group': 'hadoop', 'mode': 0644} 2019-05-29 11:50:16,177 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/ssl-server.xml.example'] {'owner': 'mapred', 'group': 'hadoop', 'mode': 0644} 2019-05-29 11:50:16,185 - HdfsResource['/atsv2/hbase/data'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://master.rh.bigdata.cluster:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'yarn-ats', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp']} 2019-05-29 11:50:16,205 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://master.rh.bigdata.cluster:50070/webhdfs/v1/atsv2/hbase/data?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpFiFkPa 2>/tmp/tmp9jUaKL''] {'logoutput': None, 'quiet': False} 2019-05-29 11:50:16,417 - call returned (7, '')
Command failed after 1 tries
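(A small manual check, reusing the same WebHDFS call that the Ambari script ran above; the URL and user.name come straight from the log, nothing new is assumed. If the NameNode web UI on port 50070 is down, this fails with the same "Connection refused".)
# Run the GETFILESTATUS call by hand from the Timeline Reader host
curl -sS -L -w '%{http_code}' -X GET 'http://master.rh.bigdata.cluster:50070/webhdfs/v1/atsv2/hbase/data?op=GETFILESTATUS&user.name=hdfs'
# If it refuses the connection, the NameNode (or at least its HTTP server) is not running; fix HDFS first, then retry Timeline Service V2.0 Reader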
... View more
05-29-2019
10:05 AM
That's the log from when I try to start Timeline Service V2.0 Reader ==> LOG.txt
... View more
05-29-2019
09:29 AM
Yes, all the HDP preparations are done:
... View more
05-29-2019
08:33 AM
Hi @Vinay, which log files do you want me to share? /var/log/? After the failure I got an alert in Ambari: hostname -f on every host gives the exact FQDN shown in the capture. Are all my issues caused by this error 111? Thank you
... View more
05-28-2019
12:59 PM
Hello, I just installed HDP-3.1.0.0 and Ambari 2.7.3.0. While creating my cluster with the Ambari wizard, I chose the 3.1.0 version, knowing that my VMs are running on Ubuntu 16.04. The installation completes successfully, but warnings show up saying that services can't start! Here is an example from during the installation: and a little after: After completing the wizard I tried to start those services manually (a DataNode on one of my hosts, for example), and it works for a few seconds before turning off. Can anyone help?
... View more
Labels:
05-10-2019
07:04 PM
mysql> SELECT table_name, table_schema, engine FROM information_schema.tables;
+---------------------------------------+--------------------+--------+
| table_name | table_schema | engine |
+---------------------------------------+--------------------+--------+
| CHARACTER_SETS | information_schema | MEMORY |
| COLLATIONS | information_schema | MEMORY |
| COLLATION_CHARACTER_SET_APPLICABILITY | information_schema | MEMORY |
| COLUMNS | information_schema | InnoDB |
| COLUMN_PRIVILEGES | information_schema | MEMORY |
| ENGINES | information_schema | MEMORY |
| EVENTS | information_schema | InnoDB |
| FILES | information_schema | MEMORY |
| GLOBAL_STATUS | information_schema | MEMORY |
| GLOBAL_VARIABLES | information_schema | MEMORY |
| KEY_COLUMN_USAGE | information_schema | MEMORY |
| OPTIMIZER_TRACE | information_schema | InnoDB |
| PARAMETERS | information_schema | InnoDB |
| PARTITIONS | information_schema | InnoDB |
| PLUGINS | information_schema | InnoDB |
| PROCESSLIST | information_schema | InnoDB |
| PROFILING | information_schema | MEMORY |
| REFERENTIAL_CONSTRAINTS | information_schema | MEMORY |
| ROUTINES | information_schema | InnoDB |
| SCHEMATA | information_schema | MEMORY |
| SCHEMA_PRIVILEGES | information_schema | MEMORY |
| SESSION_STATUS | information_schema | MEMORY |
| SESSION_VARIABLES | information_schema | MEMORY |
| STATISTICS | information_schema | MEMORY |
| TABLES | information_schema | MEMORY |
| TABLESPACES | information_schema | MEMORY |
| TABLE_CONSTRAINTS | information_schema | MEMORY |
| TABLE_PRIVILEGES | information_schema | MEMORY |
| TRIGGERS | information_schema | InnoDB |
| USER_PRIVILEGES | information_schema | MEMORY |
| VIEWS | information_schema | InnoDB |
| INNODB_LOCKS | information_schema | MEMORY |
| INNODB_TRX | information_schema | MEMORY |
| INNODB_SYS_DATAFILES | information_schema | MEMORY |
| INNODB_FT_CONFIG | information_schema | MEMORY |
| INNODB_SYS_VIRTUAL | information_schema | MEMORY |
| INNODB_CMP | information_schema | MEMORY |
| INNODB_FT_BEING_DELETED | information_schema | MEMORY |
| INNODB_CMP_RESET | information_schema | MEMORY |
| INNODB_CMP_PER_INDEX | information_schema | MEMORY |
| INNODB_CMPMEM_RESET | information_schema | MEMORY |
| INNODB_FT_DELETED | information_schema | MEMORY |
| INNODB_BUFFER_PAGE_LRU | information_schema | MEMORY |
| INNODB_LOCK_WAITS | information_schema | MEMORY |
| INNODB_TEMP_TABLE_INFO | information_schema | MEMORY |
| INNODB_SYS_INDEXES | information_schema | MEMORY |
| INNODB_SYS_TABLES | information_schema | MEMORY |
| INNODB_SYS_FIELDS | information_schema | MEMORY |
| INNODB_CMP_PER_INDEX_RESET | information_schema | MEMORY |
| INNODB_BUFFER_PAGE | information_schema | MEMORY |
| INNODB_FT_DEFAULT_STOPWORD | information_schema | MEMORY |
| INNODB_FT_INDEX_TABLE | information_schema | MEMORY |
| INNODB_FT_INDEX_CACHE | information_schema | MEMORY |
| INNODB_SYS_TABLESPACES | information_schema | MEMORY |
| INNODB_METRICS | information_schema | MEMORY |
| INNODB_SYS_FOREIGN_COLS | information_schema | MEMORY |
| INNODB_CMPMEM | information_schema | MEMORY |
| INNODB_BUFFER_POOL_STATS | information_schema | MEMORY |
| INNODB_SYS_COLUMNS | information_schema | MEMORY |
| INNODB_SYS_FOREIGN | information_schema | MEMORY |
| INNODB_SYS_TABLESTATS | information_schema | MEMORY |
| columns_priv | mysql | MyISAM |
| db | mysql | MyISAM |
| engine_cost | mysql | InnoDB |
| event | mysql | MyISAM |
| func | mysql | MyISAM |
| general_log | mysql | CSV |
| gtid_executed | mysql | InnoDB |
| help_category | mysql | InnoDB |
| help_keyword | mysql | InnoDB |
| help_relation | mysql | InnoDB |
| help_topic | mysql | InnoDB |
| innodb_index_stats | mysql | InnoDB |
| innodb_table_stats | mysql | InnoDB |
| ndb_binlog_index | mysql | MyISAM |
| plugin | mysql | InnoDB |
| proc | mysql | MyISAM |
| procs_priv | mysql | MyISAM |
| proxies_priv | mysql | MyISAM |
| server_cost | mysql | InnoDB |
| servers | mysql | InnoDB |
| slave_master_info | mysql | InnoDB |
| slave_relay_log_info | mysql | InnoDB |
| slave_worker_info | mysql | InnoDB |
| slow_log | mysql | CSV |
| tables_priv | mysql | MyISAM |
| time_zone | mysql | InnoDB |
| time_zone_leap_second | mysql | InnoDB |
| time_zone_name | mysql | InnoDB |
| time_zone_transition | mysql | InnoDB |
| time_zone_transition_type | mysql | InnoDB |
| user | mysql | MyISAM |
+---------------------------------------+--------------------+--------+
92 rows in set (0.01 sec)
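(A narrower query that may be more useful here; it only assumes the Hive Metastore schema would be named hive, which matches the JDBC URL jdbc:mysql://master.rh.bigdata.cluster/hive used elsewhere in this thread.)
mysql> SELECT table_name, table_schema, engine FROM information_schema.tables WHERE table_schema = 'hive';
-- An empty result means the hive schema (or its tables) was never created on this MySQL server.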
... View more
05-10-2019
07:03 PM
Hi Geoffrey Shelton Okot, I don't see hive in my table_schema; is that the problem?
+---------------------------------------+--------------------+--------+
| table_name | table_schema | engine |
+---------------------------------------+--------------------+--------+
| CHARACTER_SETS | information_schema | MEMORY |
| PLUGINS | information_schema | InnoDB |
| PROCESSLIST | information_schema | InnoDB |
| PROFILING | information_schema | MEMORY |
| REFERENTIAL_CONSTRAINTS | information_schema | MEMORY |
| ROUTINES | information_schema | InnoDB |
| SCHEMATA | information_schema | MEMORY |
| SCHEMA_PRIVILEGES | information_schema | MEMORY |
| SESSION_STATUS | information_schema | MEMORY |
| SESSION_VARIABLES | information_schema | MEMORY |
| STATISTICS | information_schema | MEMORY |
| TABLES | information_schema | MEMORY |
| TABLESPACES | information_schema | MEMORY |
| TABLE_CONSTRAINTS | information_schema | MEMORY |
| TABLE_PRIVILEGES | information_schema | MEMORY |
| TRIGGERS | information_schema | InnoDB |
| USER_PRIVILEGES | information_schema | MEMORY |
| VIEWS | information_schema | InnoDB |
| INNODB_LOCKS | information_schema | MEMORY |
| INNODB_FT_CONFIG | information_schema | MEMORY |
| INNODB_SYS_VIRTUAL | information_schema | MEMORY |
| INNODB_CMP | information_schema | MEMORY |
| INNODB_FT_BEING_DELETED | information_schema | MEMORY |
| INNODB_CMP_RESET | information_schema | MEMORY |
| INNODB_CMP_PER_INDEX | information_schema | MEMORY |
| INNODB_CMPMEM_RESET | information_schema | MEMORY |
| INNODB_FT_DELETED | information_schema | MEMORY |
| INNODB_BUFFER_PAGE_LRU | information_schema | MEMORY |
| INNODB_LOCK_WAITS | information_schema | MEMORY |
| INNODB_TEMP_TABLE_INFO | information_schema | MEMORY |
| INNODB_SYS_INDEXES | information_schema | MEMORY |
| INNODB_SYS_TABLES | information_schema | MEMORY |
| INNODB_SYS_FIELDS | information_schema | MEMORY |
| INNODB_CMP_PER_INDEX_RESET | information_schema | MEMORY |
| INNODB_BUFFER_PAGE | information_schema | MEMORY |
| INNODB_FT_DEFAULT_STOPWORD | information_schema | MEMORY |
| INNODB_FT_INDEX_TABLE | information_schema | MEMORY |
| INNODB_FT_INDEX_CACHE | information_schema | MEMORY |
| INNODB_SYS_TABLESPACES | information_schema | MEMORY |
| INNODB_METRICS | information_schema | MEMORY |
| INNODB_SYS_FOREIGN_COLS | information_schema | MEMORY |
| INNODB_CMPMEM | information_schema | MEMORY |
| INNODB_BUFFER_POOL_STATS | information_schema | MEMORY |
| INNODB_SYS_COLUMNS | information_schema | MEMORY |
| INNODB_SYS_FOREIGN | information_schema | MEMORY |
| INNODB_SYS_TABLESTATS | information_schema | MEMORY |
| columns_priv | mysql | MyISAM |
| db | mysql | MyISAM |
| engine_cost | mysql | InnoDB |
| event | mysql | MyISAM |
| func | mysql | MyISAM |
| general_log | mysql | CSV |
| gtid_executed | mysql | InnoDB |
| help_category | mysql | InnoDB |
| help_keyword | mysql | InnoDB |
| help_relation | mysql | InnoDB |
| help_topic | mysql | InnoDB |
| innodb_index_stats | mysql | InnoDB |
| innodb_table_stats | mysql | InnoDB |
| ndb_binlog_index | mysql | MyISAM |
| plugin | mysql | InnoDB |
| proc | mysql | MyISAM |
| procs_priv | mysql | MyISAM |
| proxies_priv | mysql | MyISAM |
| server_cost | mysql | InnoDB |
| servers | mysql | InnoDB |
| slave_master_info | mysql | InnoDB |
| slave_relay_log_info | mysql | InnoDB |
| slave_worker_info | mysql | InnoDB |
| slow_log | mysql | CSV |
| tables_priv | mysql | MyISAM |
| time_zone | mysql | InnoDB |
| time_zone_leap_second | mysql | InnoDB |
| time_zone_name | mysql | InnoDB |
| time_zone_transition | mysql | InnoDB |
| time_zone_transition_type | mysql | InnoDB |
| user | mysql | MyISAM |
+---------------------------------------+--------------------+--------+
92 rows in set (0.02 sec)
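(If the hive schema really is missing, a minimal sketch of creating it and granting access; 'master.rh.bigdata.cluster' matches the host in the Access denied error quoted later in this thread, and 'hivepassword' is only a placeholder for the password configured in Ambari. These are standard MySQL 5.7 statements, not a confirmed fix for this cluster.)
mysql> CREATE DATABASE hive;
mysql> CREATE USER IF NOT EXISTS 'hive'@'master.rh.bigdata.cluster' IDENTIFIED BY 'hivepassword';
mysql> GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'master.rh.bigdata.cluster';
mysql> FLUSH PRIVILEGES;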
... View more
05-10-2019
07:02 PM
Hi Geoffrey Shelton Okot, Yeah, I don't understand why it doesn't work!! I tried all the solutions but still got the same error as in my first post. Here are my OS details:
root@RHBigData1:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.6 LTS
Release: 16.04
Codename: xenial
Here is my hiveserver2.log error:
java.sql.SQLException: Unable to open a test connection to the given database. JDBC url = jdbc:mysql://master.rh.bigdata.cluster/hive?createDatabaseIfNotExist=true, username = hive. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------
java.sql.SQLSyntaxErrorException: Access denied for user 'hive'@'master.rh.bigdata.cluster' to database 'hive'
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:832)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:456)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:240)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:207)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:361)
at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:416)
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:483)
at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:296)
at sun.reflect.GeneratedConstructorAccessor71.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:606)
.
.
.
.
.
Caused by: java.sql.SQLSyntaxErrorException: Access denied for user 'hive'@'master.rh.bigdata.cluster' to database 'hive'
But doesn't my hive@master.rh.bigdata.cluster user have all the privileges on this database??
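(To answer that question directly, two read-only checks; they only use the user and host names already shown in the Access denied message above.)
mysql> SELECT user, host FROM mysql.user WHERE user = 'hive';
mysql> SHOW GRANTS FOR 'hive'@'master.rh.bigdata.cluster';
-- If the only matching row is something like 'hive'@'localhost', or the grants do not cover the hive database, that would explain the "Access denied for user 'hive'@'master.rh.bigdata.cluster' to database 'hive'" error.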
... View more
05-10-2019
01:09 PM
Here is my hiveserver2.log:
2019-05-07 14:23:05,902 ERROR [main]: Datastore.Schema (Log4JLogger.java:error(125)) - Failed initialising database.
Unable to open a test connection to the given database. JDBC url = jdbc:mysql://master.rh.bigdata.cluster/hive?createDatabaseIfNotExist=true, username = hive. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:832)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:456)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:240)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:207)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:361)
at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:416)
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:483)
at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:296)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:606)
at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
at org.datanucleus.NucleusContextHelper.createStoreManagerForProperties(NucleusContextHelper.java:133)
at org.datanucleus.PersistenceNucleusContextImpl.initialise(PersistenceNucleusContextImpl.java:420)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:821)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:338)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:217)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
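(A Communications link failure usually means the MySQL server cannot be reached over the network at all. A minimal check, assuming MySQL 5.7 on Ubuntu 16.04 with the default configuration layout; the config path may differ on your machine.)
# Is MySQL listening on the network, or only on 127.0.0.1?
netstat -tnlpa | grep 3306
# The Ubuntu default binds MySQL to localhost only; check bind-address
grep -n 'bind-address' /etc/mysql/mysql.conf.d/mysqld.cnf
# After setting bind-address to 0.0.0.0 (or the host's IP) and restarting MySQL, test the remote login Hive uses
mysql -h master.rh.bigdata.cluster -u hive -p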
... View more
05-10-2019
01:09 PM
Hello, yeah I tried all the solutions. It's weird!!! I have this:
NAME="Ubuntu"
VERSION="16.04.6 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.6 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
I still get the same error I posted at first!
... View more
05-10-2019
01:09 PM
Hi Geoffrey Shelton Okot, Yeah, I tried with an existing database and that didn't work either. I tried with the IP address and with localhost too! The error is the same! PS: I can't see your screenshot.
... View more
05-10-2019
01:09 PM
Hello, mysql-connector-java is updated on the ambari-server (driver and DB), but I still get the warning when ambari-server starts, and the connection check from Ambari still fails. Here are the stderr & stdout:

stderr:
2019-04-16 15:55:44,477 - Check db_connection_check was unsuccessful. Exit code: 1. Message: Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
ERROR: Unable to connect to the DB. Please check DB connection properties.
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py", line 530, in <module>
    CheckHost().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 367, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py", line 207, in actionexecute
    raise Fail(error_message)
resource_management.core.exceptions.Fail: Check db_connection_check was unsuccessful. Exit code: 1. Message: Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
ERROR: Unable to connect to the DB. Please check DB connection properties.
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.

stdout:
2019-04-16 15:55:43,864 - Host checks started.
2019-04-16 15:55:43,864 - Check execute list: db_connection_check
2019-04-16 15:55:43,864 - DB connection check started.
WARNING: File /var/lib/ambari-agent/cache/DBConnectionVerification.jar already exists, assuming it was downloaded before
WARNING: File /var/lib/ambari-agent/cache/mysql-connector-java.jar already exists, assuming it was downloaded before
2019-04-16 15:55:43,866 - call['/usr/jdk64/jdk1.8.0_112/bin/java -cp /var/lib/ambari-agent/cache/DBConnectionVerification.jar:/var/lib/ambari-agent/cache/mysql-connector-java.jar -Djava.library.path=/var/lib/ambari-agent/cache org.apache.ambari.server.DBConnectionVerification "jdbc:mysql://master.rh.bigdata.cluster/hive" "hive" [PROTECTED] com.mysql.jdbc.Driver'] {}
2019-04-16 15:55:44,473 - call returned (1, "Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.\nERROR: Unable to connect to the DB. Please check DB connection properties.\ncom.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure\n\nThe last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.")
2019-04-16 15:55:44,474 - DB connection check completed.
2019-04-16 15:55:44,476 - Host checks completed.
2019-04-16 15:55:44,477 - Check db_connection_check was unsuccessful. Exit code: 1. Message: Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
ERROR: Unable to connect to the DB. Please check DB connection properties.
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
Command failed after 1 tries
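Since "Communications link failure ... 0 milliseconds ago" means the driver never got a TCP connection to MySQL at all, one way to dig further (a sketch based on the exact call in the stdout above; "hivepassword" is a placeholder for the real password) is to rerun the verification by hand and check how mysqld is listening:

# Re-run the same check Ambari runs; the cj driver class avoids the deprecation warning
/usr/jdk64/jdk1.8.0_112/bin/java \
  -cp /var/lib/ambari-agent/cache/DBConnectionVerification.jar:/var/lib/ambari-agent/cache/mysql-connector-java.jar \
  org.apache.ambari.server.DBConnectionVerification \
  "jdbc:mysql://master.rh.bigdata.cluster/hive" "hive" "hivepassword" com.mysql.cj.jdbc.Driver

# On the MySQL host: is mysqld listening on all interfaces (0.0.0.0 / :::3306) or only on 127.0.0.1?
netstat -tnlp | grep 3306

# If it only listens on 127.0.0.1, look for a bind-address directive (the config location varies by distro)
grep -Rni bind-address /etc/my.cnf /etc/my.cnf.d /etc/mysql 2>/dev/null

# Check that the hive user is allowed to connect from remote hosts, not only from localhost
mysql -u root -p -e "SELECT user, host FROM mysql.user WHERE user='hive';"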
... View more
05-10-2019
01:09 PM
Hi Geoffrey Shelton Okot, first I want to thank you for your answer. I deleted Hive and redid the whole installation and configuration as you said and as the tutorial explains, but I still get the same error at the end. Here are some screenshots of the configuration: MySQL and system time, user and database in MySQL, and the Hive configuration in Ambari. I also found this warning in /var/log/ambari-server/ambari-server-check-database.log:
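For comparison, this is roughly what the Metastore database and user setup looks like on the MySQL host (a sketch only; "hivepassword" is a placeholder for the password entered in the Ambari Hive config, and the '%' grant can be narrowed to specific hosts):

# Run on the MySQL host; drop the database/user first if they already exist
mysql -u root -p <<'SQL'
CREATE DATABASE hive;
CREATE USER 'hive'@'%' IDENTIFIED BY 'hivepassword';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
FLUSH PRIVILEGES;
SQL

With that in place, the Hive settings in Ambari would use the existing MySQL database option, the URL jdbc:mysql://master.rh.bigdata.cluster/hive, and the same user/password, and the connection test in the wizard should pass before continuing.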
... View more
05-10-2019
01:09 PM
Hello Geoffrey Shelton Okot, I tried with the "existing MySQL database" option but still get the same error. I also tried with the IP address and with localhost in the database URL, but still nothing. Here is a screenshot of my configuration. PS: I can't see your screenshot, can you upload it again?
... View more