Member since 11-03-2020 | 12 Posts | 0 Kudos Received | 0 Solutions
11-08-2020
05:52 PM
@Shelton Here is the /etc/hosts file:
# Your system has configured 'manage_etc_hosts' as True.
# As a result, if you wish for changes to this file to persist
# then you will need to either
# a.) make changes to the master file in /etc/cloud/templates/hosts.debian.tmpl
# b.) change or remove the value of 'manage_etc_hosts' in
# /etc/cloud/cloud.cfg or cloud-config from user-data
#
<master-ip> master <hostname>-00
<slave-ip> slave01 <hostname>-01
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

For the yarn-site.xml file in the hadoop folder, this is one of the configs:

<property>
<name>yarn.resourcemanager.hostname</name>
<value><master-ip></value>
</property>
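
For context, since only yarn.resourcemanager.hostname is set, the other ResourceManager endpoints default to that same host on the standard ports (8032 client RPC, 8030 scheduler, 8031 resource-tracker, 8033 admin, 8088 web UI), which are the same ports the ResourceManager log below binds to. If you prefer to pin them explicitly, the equivalent entries would look like this (a sketch of the defaults, not copied from my file):

<!-- explicit equivalents of the defaults derived from yarn.resourcemanager.hostname -->
<property>
  <name>yarn.resourcemanager.address</name>                  <!-- client RPC, port 8032 -->
  <value><master-ip>:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>        <!-- ApplicationMaster to scheduler, port 8030 -->
  <value><master-ip>:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name> <!-- NodeManager heartbeats, port 8031 -->
  <value><master-ip>:8031</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>           <!-- web UI, port 8088 -->
  <value><master-ip>:8088</value>
</property>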
11-08-2020
07:05 AM
MASTER | RESOURCEMANAGER LOGS |

2020-11-08 14:45:29,854 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting ResourceManager
STARTUP_MSG: host = bupry-dev-00/<IP-MASTER>
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.7.3
STARTUP_MSG: classpath = /home/bupry_dev/development/hadoop_home/hadoop-2.7.3/etc/hadoop/:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/etc/hadoop/:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/etc/hadoop/:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/servlet-api-2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/avro-1.7.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/stax-api-1.0-2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jsch-0.1.42.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/paranamer-2.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/asm-3.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jetty-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/hamcrest-core-1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/activation-1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jersey-core-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jets3t-0.9.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/curator-framework-2.7.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/httpclient-4.2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-io-2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-codec-1.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/curator-client-2.7.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/junit-4.11.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/xz-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/commo
n/lib/guava-11.0.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/log4j-1.2.17.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-lang-2.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-net-3.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jettison-1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/xmlenc-0.52.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jersey-server-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/gson-2.2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jersey-json-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-digester-1.8.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/zookeeper-3.4.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jsp-api-2.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/httpcore-4.2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/hadoop-nfs-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/asm-3.2.j
ar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-co
dec-1.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/home/bupry_dev/de
velopment/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/home/bupry_d
ev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/usr/lib/jvm/java-8-openjdk-amd64/jre//lib/tools.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/contrib/capacity-scheduler/*.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/contrib/capacity-scheduler/*.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/contrib/capacity-scheduler/*.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/contrib/capacity-scheduler/*.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/s
hare/hadoop/yarn/lib/xz-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/etc/hadoop//rm-config/log4j.properties
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG: java = 1.8.0_272
************************************************************/
2020-11-08 14:45:29,862 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: registered UNIX signal handlers for [TERM, HUP, INT]
2020-11-08 14:45:30,078 INFO org.apache.hadoop.conf.Configuration: found resource core-site.xml at file:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/etc/hadoop/core-site.xml
2020-11-08 14:45:30,110 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2020-11-08 14:45:30,141 INFO org.apache.hadoop.security.Groups: clearing userToGroupsMap cache
2020-11-08 14:45:30,195 INFO org.apache.hadoop.conf.Configuration: found resource yarn-site.xml at file:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/etc/hadoop/yarn-site.xml
2020-11-08 14:45:30,307 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.RMFatalEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMFatalEventDispatcher
2020-11-08 14:45:30,349 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: NMTokenKeyRollingInterval: 86400000ms and NMTokenKeyActivationDelay: 900000ms
2020-11-08 14:45:30,353 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager: ContainerTokenKeyRollingInterval: 86400000ms and ContainerTokenKeyActivationDelay: 900000ms
2020-11-08 14:45:30,357 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: AMRMTokenKeyRollingInterval: 86400000ms and AMRMTokenKeyActivationDelay: 900000 ms
2020-11-08 14:45:30,385 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStoreEventType for class org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler
2020-11-08 14:45:30,387 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.NodesListManagerEventType for class org.apache.hadoop.yarn.server.resourcemanager.NodesListManager
2020-11-08 14:45:30,387 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Using Scheduler: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
2020-11-08 14:45:30,402 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher
2020-11-08 14:45:30,403 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher
2020-11-08 14:45:30,404 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher
2020-11-08 14:45:30,405 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$NodeEventDispatcher
2020-11-08 14:45:30,457 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2020-11-08 14:45:30,520 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2020-11-08 14:45:30,520 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ResourceManager metrics system started
2020-11-08 14:45:30,531 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.RMAppManagerEventType for class org.apache.hadoop.yarn.server.resourcemanager.RMAppManager
2020-11-08 14:45:30,538 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncherEventType for class org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher
2020-11-08 14:45:30,540 INFO org.apache.hadoop.yarn.server.resourcemanager.RMNMInfo: Registered RMNMInfo MBean
2020-11-08 14:45:30,542 INFO org.apache.hadoop.yarn.security.YarnAuthorizationProvider: org.apache.hadoop.yarn.security.ConfiguredYarnAuthorizer is instiantiated.
2020-11-08 14:45:30,542 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2020-11-08 14:45:30,543 INFO org.apache.hadoop.conf.Configuration: found resource capacity-scheduler.xml at file:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/etc/hadoop/capacity-scheduler.xml
2020-11-08 14:45:30,580 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc mb per queue for root is undefined
2020-11-08 14:45:30,580 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc vcore per queue for root is undefined
2020-11-08 14:45:30,586 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: root, capacity=1.0, asboluteCapacity=1.0, maxCapacity=1.0, asboluteMaxCapacity=1.0, state=RUNNING, acls=ADMINISTER_QUEUE:*SUBMIT_APP:*, labels=*,
, reservationsContinueLooking=true
2020-11-08 14:45:30,586 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Initialized parent-queue root name=root, fullname=root
2020-11-08 14:45:30,593 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc mb per queue for root.default is undefined
2020-11-08 14:45:30,593 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc vcore per queue for root.default is undefined
2020-11-08 14:45:30,594 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Initializing default
capacity = 1.0 [= (float) configuredCapacity / 100 ]
asboluteCapacity = 1.0 [= parentAbsoluteCapacity * capacity ]
maxCapacity = 1.0 [= configuredMaxCapacity ]
absoluteMaxCapacity = 1.0 [= 1.0 maximumCapacity undefined, (parentAbsoluteMaxCapacity * maximumCapacity) / 100 otherwise ]
userLimit = 100 [= configuredUserLimit ]
userLimitFactor = 1.0 [= configuredUserLimitFactor ]
maxApplications = 10000 [= configuredMaximumSystemApplicationsPerQueue or (int)(configuredMaximumSystemApplications * absoluteCapacity)]
maxApplicationsPerUser = 10000 [= (int)(maxApplications * (userLimit / 100.0f) * userLimitFactor) ]
usedCapacity = 0.0 [= usedResourcesMemory / (clusterResourceMemory * absoluteCapacity)]
absoluteUsedCapacity = 0.0 [= usedResourcesMemory / clusterResourceMemory]
maxAMResourcePerQueuePercent = 0.9 [= configuredMaximumAMResourcePercent ]
minimumAllocationFactor = 0.9444444 [= (float)(maximumAllocationMemory - minimumAllocationMemory) / maximumAllocationMemory ]
maximumAllocation = <memory:9216, vCores:4> [= configuredMaxAllocation ]
numContainers = 0 [= currentNumContainers ]
state = RUNNING [= configuredState ]
acls = ADMINISTER_QUEUE:*SUBMIT_APP:* [= configuredAcls ]
nodeLocalityDelay = 40
labels=*,
nodeLocalityDelay = 40
reservationsContinueLooking = true
preemptionDisabled = true
2020-11-08 14:45:30,594 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized queue: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=0, numContainers=0
2020-11-08 14:45:30,595 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized queue: root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>usedCapacity=0.0, numApps=0, numContainers=0
2020-11-08 14:45:30,595 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized root queue root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>usedCapacity=0.0, numApps=0, numContainers=0
2020-11-08 14:45:30,595 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized queue mappings, override: false
2020-11-08 14:45:30,595 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized CapacityScheduler with calculator=class org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator, minimumAllocation=<<memory:512, vCores:1>>, maximumAllocation=<<memory:9216, vCores:4>>, asynchronousScheduling=false, asyncScheduleInterval=5ms
2020-11-08 14:45:30,604 INFO org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher: YARN system metrics publishing service is not enabled
2020-11-08 14:45:30,604 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioning to active state
2020-11-08 14:45:30,614 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating AMRMToken
2020-11-08 14:45:30,614 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager: Rolling master-key for container-tokens
2020-11-08 14:45:30,615 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Rolling master-key for nm-tokens
2020-11-08 14:45:30,615 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2020-11-08 14:45:30,615 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: storing master key with keyID 1
2020-11-08 14:45:30,615 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing RMDTMasterKey.
2020-11-08 14:45:30,616 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
2020-11-08 14:45:30,616 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2020-11-08 14:45:30,616 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: storing master key with keyID 2
2020-11-08 14:45:30,616 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing RMDTMasterKey.
2020-11-08 14:45:30,618 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.nodelabels.event.NodeLabelsStoreEventType for class org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager$ForwardingEventHandler
2020-11-08 14:45:30,641 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2020-11-08 14:45:30,666 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8031
2020-11-08 14:45:30,679 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.api.ResourceTrackerPB to the server
2020-11-08 14:45:30,679 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2020-11-08 14:45:30,679 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8031: starting
2020-11-08 14:45:30,698 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2020-11-08 14:45:30,702 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8030
2020-11-08 14:45:30,711 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB to the server
2020-11-08 14:45:30,711 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2020-11-08 14:45:30,711 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8030: starting
2020-11-08 14:45:30,753 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2020-11-08 14:45:30,754 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8032
2020-11-08 14:45:30,756 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ApplicationClientProtocolPB to the server
2020-11-08 14:45:30,758 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2020-11-08 14:45:30,758 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8032: starting
2020-11-08 14:45:30,765 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned to active state
2020-11-08 14:45:30,823 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2020-11-08 14:45:30,828 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2020-11-08 14:45:30,844 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.resourcemanager is not defined
2020-11-08 14:45:30,850 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2020-11-08 14:45:30,852 INFO org.apache.hadoop.http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context cluster
2020-11-08 14:45:30,852 INFO org.apache.hadoop.http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context static
2020-11-08 14:45:30,852 INFO org.apache.hadoop.http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context logs
2020-11-08 14:45:30,853 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context cluster
2020-11-08 14:45:30,853 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2020-11-08 14:45:30,853 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2020-11-08 14:45:30,855 INFO org.apache.hadoop.http.HttpServer2: adding path spec: /cluster/*
2020-11-08 14:45:30,856 INFO org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2020-11-08 14:45:31,137 INFO org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2020-11-08 14:45:31,140 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 8088
2020-11-08 14:45:31,140 INFO org.mortbay.log: jetty-6.1.26
2020-11-08 14:45:31,159 INFO org.mortbay.log: Extract jar:file:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar!/webapps/cluster to /tmp/Jetty_138_68_238_32_8088_cluster____.4zd8nt/webapp
2020-11-08 14:45:31,260 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2020-11-08 14:45:31,260 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
2020-11-08 14:45:31,260 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2020-11-08 14:45:31,826 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@<IP-MASTER>:8088
2020-11-08 14:45:31,826 INFO org.apache.hadoop.yarn.webapp.WebApps: Web app cluster started at 8088
2020-11-08 14:45:31,847 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2020-11-08 14:45:31,847 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8033
2020-11-08 14:45:31,848 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.api.ResourceManagerAdministrationProtocolPB to the server
2020-11-08 14:45:31,849 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2020-11-08 14:45:31,849 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8033: starting
2020-11-08 14:45:33,645 INFO org.apache.hadoop.yarn.util.RackResolver: Resolved slave01 to /default-rack
2020-11-08 14:45:33,647 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: NodeManager from node slave01(cmPort: 38241 httpPort: 8042) registered with capability: <memory:28672, vCores:6>, assigned nodeId slave01:38241
2020-11-08 14:45:33,650 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: slave01:38241 Node Transitioned from NEW to RUNNING
2020-11-08 14:45:33,654 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added node slave01:38241 clusterResource: <memory:28672, vCores:6>
2020-11-08 14:45:57,812 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 1
2020-11-08 14:46:03,443 WARN org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The specific max attempts: 0 for application: 1 is invalid, because it is out of the range [1, 2]. Use the global max attempts instead.
2020-11-08 14:46:03,444 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 1 submitted by user bupry_dev
2020-11-08 14:46:03,444 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1604846730605_0001
2020-11-08 14:46:03,446 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bupry_dev IP=<IP-MASTER> OPERATION=Submit Application Request TARGET=ClientRMService RESULT=SUCCESS APPID=application_1604846730605_0001
2020-11-08 14:46:03,451 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1604846730605_0001
2020-11-08 14:46:03,451 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1604846730605_0001 State change from NEW to NEW_SAVING
2020-11-08 14:46:03,452 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1604846730605_0001 State change from NEW_SAVING to SUBMITTED
2020-11-08 14:46:03,453 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1604846730605_0001 user: bupry_dev leaf-queue of parent: root #applications: 1
2020-11-08 14:46:03,453 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1604846730605_0001 from user: bupry_dev, in queue: default
2020-11-08 14:46:03,471 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1604846730605_0001 State change from SUBMITTED to ACCEPTED
2020-11-08 14:46:03,491 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1604846730605_0001_000001
2020-11-08 14:46:03,492 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1604846730605_0001_000001 State change from NEW to SUBMITTED
2020-11-08 14:46:03,502 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1604846730605_0001 from user: bupry_dev activated in queue: default
2020-11-08 14:46:03,503 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1604846730605_0001 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@339daf0, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2020-11-08 14:46:03,503 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1604846730605_0001_000001 to scheduler from user bupry_dev in queue default
2020-11-08 14:46:03,515 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1604846730605_0001_000001 State change from SUBMITTED to SCHEDULED
2020-11-08 14:46:03,800 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1604846730605_0001_01_000001 Container Transitioned from NEW to ALLOCATED
2020-11-08 14:46:03,800 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bupry_dev OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1604846730605_0001 CONTAINERID=container_1604846730605_0001_01_000001
2020-11-08 14:46:03,801 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1604846730605_0001_01_000001 of capacity <memory:1024, vCores:1> on host slave01:38241, which has 1 containers, <memory:1024, vCores:1> used and <memory:27648, vCores:5> available after allocation
2020-11-08 14:46:03,802 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1604846730605_0001_000001 container=Container: [ContainerId: container_1604846730605_0001_01_000001, NodeId: slave01:38241, NodeHttpAddress: slave01:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:28672, vCores:6> type=OFF_SWITCH
2020-11-08 14:46:03,802 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.035714287, absoluteUsedCapacity=0.035714287, numApps=1, numContainers=1
2020-11-08 14:46:03,802 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.035714287 absoluteUsedCapacity=0.035714287 used=<memory:1024, vCores:1> cluster=<memory:28672, vCores:6>
2020-11-08 14:46:03,814 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : slave01:38241 for container : container_1604846730605_0001_01_000001
2020-11-08 14:46:03,821 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1604846730605_0001_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2020-11-08 14:46:03,822 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1604846730605_0001_000001
2020-11-08 14:46:03,824 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1604846730605_0001 AttemptId: appattempt_1604846730605_0001_000001 MasterContainer: Container: [ContainerId: container_1604846730605_0001_01_000001, NodeId: slave01:38241, NodeHttpAddress: slave01:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: <IP-MASTER>:38241 }, ]
2020-11-08 14:46:03,834 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1604846730605_0001_000001 State change from SCHEDULED to ALLOCATED_SAVING
2020-11-08 14:46:03,835 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1604846730605_0001_000001 State change from ALLOCATED_SAVING to ALLOCATED
2020-11-08 14:46:03,837 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1604846730605_0001_000001
2020-11-08 14:46:03,862 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1604846730605_0001_01_000001, NodeId: slave01:38241, NodeHttpAddress: slave01:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: <IP-MASTER>:38241 }, ] for AM appattempt_1604846730605_0001_000001
2020-11-08 14:46:03,862 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1604846730605_0001_01_000001 : {{JAVA_HOME}}/bin/java,-server,-Xmx512m,-Djava.io.tmpdir={{PWD}}/tmp,-Dspark.yarn.app.container.log.dir=<LOG_DIR>,org.apache.spark.deploy.yarn.ExecutorLauncher,--arg,'master:37353',--properties-file,{{PWD}}/__spark_conf__/__spark_conf__.properties,1>,<LOG_DIR>/stdout,2>,<LOG_DIR>/stderr
2020-11-08 14:46:03,864 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1604846730605_0001_000001
2020-11-08 14:46:03,866 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1604846730605_0001_000001
2020-11-08 14:49:44,697 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1604846730605_0001 State change from ACCEPTED to KILLING
2020-11-08 14:49:44,699 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1604846730605_0001_000001 with final state: KILLED, and exit status: -1000
2020-11-08 14:49:44,700 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1604846730605_0001_000001 State change from ALLOCATED to FINAL_SAVING
2020-11-08 14:49:44,700 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1604846730605_0001_000001
2020-11-08 14:49:44,701 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1604846730605_0001_000001
2020-11-08 14:49:44,702 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1604846730605_0001_000001 State change from FINAL_SAVING to KILLED
2020-11-08 14:49:44,702 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1604846730605_0001 with final state: KILLED
2020-11-08 14:49:44,702 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1604846730605_0001 State change from KILLING to FINAL_SAVING
2020-11-08 14:49:44,703 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1604846730605_0001_000001 is done. finalState=KILLED
2020-11-08 14:49:44,703 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1604846730605_0001
2020-11-08 14:49:44,704 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1604846730605_0001_000001
2020-11-08 14:49:44,706 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1604846730605_0001 State change from FINAL_SAVING to KILLED
2020-11-08 14:49:44,707 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bupry_dev OPERATION=Application Finished - Killed TARGET=RMAppManager RESULT=SUCCESS APPID=application_1604846730605_0001
2020-11-08 14:49:44,709 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1604846730605_0001,name=alma_v2,user=bupry_dev,queue=default,state=KILLED,trackingUrl=http://master:8088/cluster/app/application_1604846730605_0001,appMasterHost=N/A,startTime=1604846763443,finishTime=1604846984702,finalStatus=KILLED,memorySeconds=226166,vcoreSeconds=220,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=SPARK
2020-11-08 14:49:44,711 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1604846730605_0001_01_000001 Container Transitioned from ACQUIRED to KILLED
2020-11-08 14:49:44,711 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1604846730605_0001_01_000001 in state: KILLED event:KILL
2020-11-08 14:49:44,711 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bupry_dev OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1604846730605_0001 CONTAINERID=container_1604846730605_0001_01_000001
2020-11-08 14:49:44,711 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1604846730605_0001_01_000001 of capacity <memory:1024, vCores:1> on host slave01:38241, which currently has 0 containers, <memory:0, vCores:0> used and <memory:28672, vCores:6> available, release resources=true
2020-11-08 14:49:44,711 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=bupry_dev user-resources=<memory:0, vCores:0>
2020-11-08 14:49:44,711 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1604846730605_0001_01_000001, NodeId: slave01:38241, NodeHttpAddress: slave01:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: <IP-MASTER>:38241 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:28672, vCores:6>
2020-11-08 14:49:44,711 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:28672, vCores:6>
2020-11-08 14:49:44,711 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2020-11-08 14:49:44,712 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1604846730605_0001_000001 released container container_1604846730605_0001_01_000001 on node: host: slave01:38241 #containers=0 available=<memory:28672, vCores:6> used=<memory:0, vCores:0> with event: KILL
2020-11-08 14:49:44,712 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1604846730605_0001 requests cleared
2020-11-08 14:49:44,712 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1604846730605_0001 user: bupry_dev queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2020-11-08 14:49:44,712 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1604846730605_0001 user: bupry_dev leaf-queue of parent: root #applications: 0
2020-11-08 14:49:44,911 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=bupry_dev IP=<IP-MASTER> OPERATION=Kill Application Request TARGET=ClientRMService RESULT=SUCCESS APPID=application_1604846730605_0001
2020-11-08 14:50:18,932 ERROR org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: RECEIVED SIGNAL 15: SIGTERM
2020-11-08 14:50:18,938 ERROR org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2020-11-08 14:50:18,939 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@<IP-MASTER>:8088
2020-11-08 14:50:19,040 INFO org.apache.hadoop.ipc.Server: Stopping server on 8032
2020-11-08 14:50:19,042 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8032
2020-11-08 14:50:19,042 INFO org.apache.hadoop.ipc.Server: Stopping server on 8033
2020-11-08 14:50:19,043 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8033
2020-11-08 14:50:19,043 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2020-11-08 14:50:19,046 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioning to standby state
2020-11-08 14:50:19,046 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2020-11-08 14:50:19,046 WARN org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher: org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher$LauncherThread interrupted. Returning.
2020-11-08 14:50:19,046 INFO org.apache.hadoop.ipc.Server: Stopping server on 8030
2020-11-08 14:50:19,051 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8030
2020-11-08 14:50:19,051 INFO org.apache.hadoop.ipc.Server: Stopping server on 8031
2020-11-08 14:50:19,051 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2020-11-08 14:50:19,053 INFO org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: NMLivelinessMonitor thread interrupted
2020-11-08 14:50:19,053 ERROR org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Returning, interrupted : java.lang.InterruptedException
2020-11-08 14:50:19,053 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: AsyncDispatcher is draining to stop, igonring any new events.
2020-11-08 14:50:19,055 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8031
2020-11-08 14:50:19,055 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2020-11-08 14:50:19,055 INFO org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: AMLivelinessMonitor thread interrupted
2020-11-08 14:50:19,055 INFO org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.ContainerAllocationExpirer thread interrupted
2020-11-08 14:50:19,055 INFO org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: AMLivelinessMonitor thread interrupted
2020-11-08 14:50:19,056 ERROR org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
2020-11-08 14:50:19,057 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping ResourceManager metrics system...
2020-11-08 14:50:19,059 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ResourceManager metrics system stopped.
2020-11-08 14:50:19,059 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ResourceManager metrics system shutdown complete.
2020-11-08 14:50:19,059 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: AsyncDispatcher is draining to stop, igonring any new events.
2020-11-08 14:50:19,060 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned to standby state
2020-11-08 14:50:19,060 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down ResourceManager at bupry-dev-00/<IP-MASTER>
11-08-2020
07:04 AM
MASTER | NAMENODE LOGS | 2020-11-08 14:45:14,292 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = master/<IP-MASTER>
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.7.3
STARTUP_MSG: classpath = /home/bupry_dev/development/hadoop_home/hadoop-2.7.3/etc/hadoop/:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/servlet-api-2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/avro-1.7.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/stax-api-1.0-2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jsch-0.1.42.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/paranamer-2.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/asm-3.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jetty-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/hamcrest-core-1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/activation-1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jersey-core-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jets3t-0.9.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/curator-framework-2.7.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/httpclient-4.2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-io-2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-codec-1.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/curator-client-2.7.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/junit-4.11.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/xz-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/guava-11.0.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.1.0-incubating.j
ar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/log4j-1.2.17.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-lang-2.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-net-3.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jettison-1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/xmlenc-0.52.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jersey-server-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/gson-2.2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jersey-json-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-digester-1.8.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/zookeeper-3.4.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jsp-api-2.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/httpcore-4.2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/hadoop-nfs-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/asm-3.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/bupry_dev/development/hadoop_
home/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/home/bupry_dev/de
velopment/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/home/bupry_dev/development/hadoop_home/ha
doop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/home/bupry_dev/d
evelopment/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG: java = 1.8.0_272
************************************************************/
2020-11-08 14:45:14,300 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2020-11-08 14:45:14,303 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2020-11-08 14:45:14,525 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2020-11-08 14:45:14,609 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2020-11-08 14:45:14,609 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2020-11-08 14:45:14,611 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://master/:9000
2020-11-08 14:45:14,750 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2020-11-08 14:45:14,792 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2020-11-08 14:45:14,798 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2020-11-08 14:45:14,803 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2020-11-08 14:45:14,808 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2020-11-08 14:45:14,810 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2020-11-08 14:45:14,810 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2020-11-08 14:45:14,810 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2020-11-08 14:45:14,933 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2020-11-08 14:45:14,935 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2020-11-08 14:45:14,949 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2020-11-08 14:45:14,949 INFO org.mortbay.log: jetty-6.1.26
2020-11-08 14:45:15,069 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2020-11-08 14:45:15,091 WARN org.apache.hadoop.hdfs.server.common.Util: Path /home/bupry_dev/development/hadoop_home/hadoop-2.7.3/hadoop/data/nameNode should be specified as a URI in configuration files. Please update hdfs configuration.
2020-11-08 14:45:15,091 WARN org.apache.hadoop.hdfs.server.common.Util: Path /home/bupry_dev/development/hadoop_home/hadoop-2.7.3/hadoop/data/nameNode should be specified as a URI in configuration files. Please update hdfs configuration.
2020-11-08 14:45:15,091 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2020-11-08 14:45:15,091 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2020-11-08 14:45:15,095 WARN org.apache.hadoop.hdfs.server.common.Util: Path /home/bupry_dev/development/hadoop_home/hadoop-2.7.3/hadoop/data/nameNode should be specified as a URI in configuration files. Please update hdfs configuration.
2020-11-08 14:45:15,096 WARN org.apache.hadoop.hdfs.server.common.Util: Path /home/bupry_dev/development/hadoop_home/hadoop-2.7.3/hadoop/data/nameNode should be specified as a URI in configuration files. Please update hdfs configuration.
2020-11-08 14:45:15,118 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2020-11-08 14:45:15,119 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2020-11-08 14:45:15,154 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2020-11-08 14:45:15,154 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2020-11-08 14:45:15,155 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2020-11-08 14:45:15,155 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2020 Nov 08 14:45:15
2020-11-08 14:45:15,156 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2020-11-08 14:45:15,156 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2020-11-08 14:45:15,157 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2020-11-08 14:45:15,157 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2020-11-08 14:45:15,164 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2020-11-08 14:45:15,164 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2020-11-08 14:45:15,164 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2020-11-08 14:45:15,164 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2020-11-08 14:45:15,164 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2020-11-08 14:45:15,164 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2020-11-08 14:45:15,164 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2020-11-08 14:45:15,164 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2020-11-08 14:45:15,170 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = bupry_dev (auth:SIMPLE)
2020-11-08 14:45:15,170 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2020-11-08 14:45:15,170 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2020-11-08 14:45:15,170 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2020-11-08 14:45:15,171 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2020-11-08 14:45:15,215 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2020-11-08 14:45:15,215 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2020-11-08 14:45:15,215 INFO org.apache.hadoop.util.GSet: 1.0% max memory 889 MB = 8.9 MB
2020-11-08 14:45:15,215 INFO org.apache.hadoop.util.GSet: capacity = 2^20 = 1048576 entries
2020-11-08 14:45:15,216 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2020-11-08 14:45:15,216 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2020-11-08 14:45:15,216 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
2020-11-08 14:45:15,216 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2020-11-08 14:45:15,222 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2020-11-08 14:45:15,222 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2020-11-08 14:45:15,222 INFO org.apache.hadoop.util.GSet: 0.25% max memory 889 MB = 2.2 MB
2020-11-08 14:45:15,222 INFO org.apache.hadoop.util.GSet: capacity = 2^18 = 262144 entries
2020-11-08 14:45:15,224 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2020-11-08 14:45:15,224 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2020-11-08 14:45:15,224 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2020-11-08 14:45:15,226 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2020-11-08 14:45:15,226 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2020-11-08 14:45:15,226 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2020-11-08 14:45:15,227 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2020-11-08 14:45:15,227 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2020-11-08 14:45:15,229 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2020-11-08 14:45:15,229 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2020-11-08 14:45:15,229 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
2020-11-08 14:45:15,229 INFO org.apache.hadoop.util.GSet: capacity = 2^15 = 32768 entries
2020-11-08 14:45:15,237 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/bupry_dev/development/hadoop_home/hadoop-2.7.3/hadoop/data/nameNode/in_use.lock acquired by nodename 17835@master
2020-11-08 14:45:15,274 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /home/bupry_dev/development/hadoop_home/hadoop-2.7.3/hadoop/data/nameNode/current
2020-11-08 14:45:15,301 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/bupry_dev/development/hadoop_home/hadoop-2.7.3/hadoop/data/nameNode/current/edits_inprogress_0000000000000000270 -> /home/bupry_dev/development/hadoop_home/hadoop-2.7.3/hadoop/data/nameNode/current/edits_0000000000000000270-0000000000000000270
2020-11-08 14:45:15,308 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Planning to load image: FSImageFile(file=/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/hadoop/data/nameNode/current/fsimage_0000000000000000269, cpktTxId=0000000000000000269)
2020-11-08 14:45:15,393 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 37 INodes.
2020-11-08 14:45:15,439 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2020-11-08 14:45:15,439 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 269 from /home/bupry_dev/development/hadoop_home/hadoop-2.7.3/hadoop/data/nameNode/current/fsimage_0000000000000000269
2020-11-08 14:45:15,440 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@55562aa9 expecting start txid #270
2020-11-08 14:45:15,440 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file /home/bupry_dev/development/hadoop_home/hadoop-2.7.3/hadoop/data/nameNode/current/edits_0000000000000000270-0000000000000000270
2020-11-08 14:45:15,441 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream '/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/hadoop/data/nameNode/current/edits_0000000000000000270-0000000000000000270' to transaction ID 270
2020-11-08 14:45:15,444 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file /home/bupry_dev/development/hadoop_home/hadoop-2.7.3/hadoop/data/nameNode/current/edits_0000000000000000270-0000000000000000270 of size 1048576 edits # 1 loaded in 0 seconds
2020-11-08 14:45:15,448 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2020-11-08 14:45:15,450 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 271
2020-11-08 14:45:15,530 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2020-11-08 14:45:15,530 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 299 msecs
2020-11-08 14:45:15,648 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to master:8020
2020-11-08 14:45:15,653 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2020-11-08 14:45:15,662 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8020
2020-11-08 14:45:15,681 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use master:8020 to access this namenode/service.
2020-11-08 14:45:15,683 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2020-11-08 14:45:15,684 WARN org.apache.hadoop.hdfs.server.common.Util: Path /home/bupry_dev/development/hadoop_home/hadoop-2.7.3/hadoop/data/nameNode should be specified as a URI in configuration files. Please update hdfs configuration.
2020-11-08 14:45:15,690 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2020-11-08 14:45:15,690 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2020-11-08 14:45:15,690 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON.
The reported blocks 0 needs additional 25 blocks to reach the threshold 0.9990 of total blocks 25.
The number of live datanodes 0 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
2020-11-08 14:45:15,695 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2020-11-08 14:45:15,714 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2020-11-08 14:45:15,715 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8020: starting
2020-11-08 14:45:15,723 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: master/<IP-MASTER>:8020
2020-11-08 14:45:15,723 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2020-11-08 14:45:15,730 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2020-11-08 14:45:20,543 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(<IP-SLAVE>:50010, datanodeUuid=ae263c75-353e-4d4f-ba63-e827032d60cb, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-5a615ffa-9a18-4811-b160-5534b7ffd396;nsid=774215867;c=0) storage ae263c75-353e-4d4f-ba63-e827032d60cb
2020-11-08 14:45:20,544 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2020-11-08 14:45:20,545 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/<IP-SLAVE>:50010
2020-11-08 14:45:20,617 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
2020-11-08 14:45:20,617 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-910d1722-8b33-461e-96cd-af79f1472434 for DN <IP-SLAVE>:50010
2020-11-08 14:45:20,660 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode extension entered.
The reported blocks 24 has reached the threshold 0.9990 of total blocks 25. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 29 seconds.
2020-11-08 14:45:20,660 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
2020-11-08 14:45:20,662 INFO BlockStateChange: BLOCK* processReport: from storage DS-910d1722-8b33-461e-96cd-af79f1472434 node DatanodeRegistration(<IP-SLAVE>:50010, datanodeUuid=ae263c75-353e-4d4f-ba63-e827032d60cb, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-5a615ffa-9a18-4811-b160-5534b7ffd396;nsid=774215867;c=0), blocks: 25, hasStaleStorage: false, processing time: 8 msecs
2020-11-08 14:45:20,665 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks = 25
2020-11-08 14:45:20,665 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks = 0
2020-11-08 14:45:20,665 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 2
2020-11-08 14:45:20,665 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of over-replicated blocks = 0
2020-11-08 14:45:20,665 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written = 0
2020-11-08 14:45:20,665 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 4 msec
2020-11-08 14:45:40,665 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON, in safe mode extension.
The reported blocks 25 has reached the threshold 0.9990 of total blocks 25. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 9 seconds.
2020-11-08 14:45:50,667 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 35 secs
2020-11-08 14:45:50,667 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode is OFF
2020-11-08 14:45:50,667 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 1 racks and 1 datanodes
2020-11-08 14:45:50,668 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 2 blocks
2020-11-08 14:46:00,412 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741854_1030{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-910d1722-8b33-461e-96cd-af79f1472434:NORMAL:<IP-SLAVE>:50010|RBW]]} for /user/bupry_dev/.sparkStaging/application_1604846730605_0001/__spark_libs__1486408946155427377.zip
2020-11-08 14:46:01,030 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741855_1031{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-910d1722-8b33-461e-96cd-af79f1472434:NORMAL:<IP-SLAVE>:50010|RBW]]} for /user/bupry_dev/.sparkStaging/application_1604846730605_0001/__spark_libs__1486408946155427377.zip
2020-11-08 14:46:01,048 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: <IP-SLAVE>:50010 is added to blk_1073741854_1030{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-910d1722-8b33-461e-96cd-af79f1472434:NORMAL:<IP-SLAVE>:50010|RBW]]} size 134217728
2020-11-08 14:46:01,473 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: <IP-SLAVE>:50010 is added to blk_1073741855_1031{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-910d1722-8b33-461e-96cd-af79f1472434:NORMAL:<IP-SLAVE>:50010|RBW]]} size 0
2020-11-08 14:46:01,479 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/bupry_dev/.sparkStaging/application_1604846730605_0001/__spark_libs__1486408946155427377.zip is closed by DFSClient_NONMAPREDUCE_1697118163_16
2020-11-08 14:46:01,591 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741856_1032{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-910d1722-8b33-461e-96cd-af79f1472434:NORMAL:<IP-SLAVE>:50010|RBW]]} for /user/bupry_dev/.sparkStaging/application_1604846730605_0001/pyspark.zip
2020-11-08 14:46:01,600 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: <IP-SLAVE>:50010 is added to blk_1073741856_1032{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-910d1722-8b33-461e-96cd-af79f1472434:NORMAL:<IP-SLAVE>:50010|RBW]]} size 0
2020-11-08 14:46:01,602 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/bupry_dev/.sparkStaging/application_1604846730605_0001/pyspark.zip is closed by DFSClient_NONMAPREDUCE_1697118163_16
2020-11-08 14:46:01,615 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741857_1033{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-910d1722-8b33-461e-96cd-af79f1472434:NORMAL:<IP-SLAVE>:50010|RBW]]} for /user/bupry_dev/.sparkStaging/application_1604846730605_0001/py4j-0.10.7-src.zip
2020-11-08 14:46:01,623 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* blk_1073741857_1033{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-910d1722-8b33-461e-96cd-af79f1472434:NORMAL:<IP-SLAVE>:50010|RBW]]} is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/bupry_dev/.sparkStaging/application_1604846730605_0001/py4j-0.10.7-src.zip
2020-11-08 14:46:01,623 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: <IP-SLAVE>:50010 is added to blk_1073741857_1033{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-910d1722-8b33-461e-96cd-af79f1472434:NORMAL:<IP-SLAVE>:50010|RBW]]} size 42437
2020-11-08 14:46:02,026 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/bupry_dev/.sparkStaging/application_1604846730605_0001/py4j-0.10.7-src.zip is closed by DFSClient_NONMAPREDUCE_1697118163_16
2020-11-08 14:46:02,116 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741858_1034{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-910d1722-8b33-461e-96cd-af79f1472434:NORMAL:<IP-SLAVE>:50010|RBW]]} for /user/bupry_dev/.sparkStaging/application_1604846730605_0001/__spark_conf__.zip
2020-11-08 14:46:02,130 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* blk_1073741858_1034{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-910d1722-8b33-461e-96cd-af79f1472434:NORMAL:<IP-SLAVE>:50010|RBW]]} is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in file /user/bupry_dev/.sparkStaging/application_1604846730605_0001/__spark_conf__.zip
2020-11-08 14:46:02,131 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: <IP-SLAVE>:50010 is added to blk_1073741858_1034{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-910d1722-8b33-461e-96cd-af79f1472434:NORMAL:<IP-SLAVE>:50010|RBW]]} size 199682
2020-11-08 14:46:02,533 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/bupry_dev/.sparkStaging/application_1604846730605_0001/__spark_conf__.zip is closed by DFSClient_NONMAPREDUCE_1697118163_16
2020-11-08 14:46:25,353 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from <IP-MASTER>
2020-11-08 14:46:25,353 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2020-11-08 14:46:25,353 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 271
2020-11-08 14:46:25,353 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 35 Total time for transactions(ms): 12 Number of transactions batched in Syncs: 2 Number of syncs: 25 SyncTimes(ms): 13
2020-11-08 14:46:25,354 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 35 Total time for transactions(ms): 12 Number of transactions batched in Syncs: 2 Number of syncs: 26 SyncTimes(ms): 13
2020-11-08 14:46:25,355 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/bupry_dev/development/hadoop_home/hadoop-2.7.3/hadoop/data/nameNode/current/edits_inprogress_0000000000000000271 -> /home/bupry_dev/development/hadoop_home/hadoop-2.7.3/hadoop/data/nameNode/current/edits_0000000000000000271-0000000000000000305
2020-11-08 14:46:25,355 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 306
2020-11-08 14:46:25,982 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 0.00s at 3000.00 KB/s
2020-11-08 14:46:25,982 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000305 size 3772 bytes.
2020-11-08 14:46:25,985 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Going to retain 2 images with txid >= 269
2020-11-08 14:46:25,985 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Purging old image FSImageFile(file=/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/hadoop/data/nameNode/current/fsimage_0000000000000000267, cpktTxId=0000000000000000267)
2020-11-08 14:50:00,130 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: RECEIVED SIGNAL 15: SIGTERM
2020-11-08 14:50:00,132 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/<IP-MASTER>
************************************************************/
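Two details in this NameNode log may matter here (reading between the lines, so treat both as guesses until checked against the actual config files). First, the startup line reports fs.defaultFS is hdfs://master/:9000, yet a few lines later the RPC server binds to master:8020 and clients are told to use master:8020. That is exactly what happens if the value in core-site.xml literally contains a "/" before the port: the ":9000" then lands in the URI path instead of the authority, and the NameNode falls back to its default port 8020, so anything pointed at port 9000 would fail to connect. If that is the case, the conventional form would be (hostname "master" and port 9000 taken from the value echoed in the log; adjust if the cluster really intends something else):
<!-- core-site.xml (sketch) - no "/" between the hostname and the port -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master:9000</value>
</property>
Second, the repeated "should be specified as a URI in configuration files" warnings are mostly cosmetic: they only mean dfs.namenode.name.dir (and dfs.namenode.edits.dir, if set separately) are given as bare paths. Prefixing the same directory with file:// should silence them, for example:
<!-- hdfs-site.xml (sketch) - the directory from the warning, written as a URI -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///home/bupry_dev/development/hadoop_home/hadoop-2.7.3/hadoop/data/nameNode</value>
</property>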
11-08-2020
06:34 AM
@Shelton SLAVE01 | HADOOP DATANODE LOGS| 2020-11-08 14:45:19,046 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = slave01/<IP-SLAVE>
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.7.3
STARTUP_MSG: classpath = /home/bupry_dev/development/hadoop_home/hadoop-2.7.3/etc/hadoop/:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/servlet-api-2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/avro-1.7.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/stax-api-1.0-2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jsch-0.1.42.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/paranamer-2.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/asm-3.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jetty-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/hamcrest-core-1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/activation-1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jersey-core-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jets3t-0.9.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/curator-framework-2.7.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/httpclient-4.2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-io-2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-codec-1.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/curator-client-2.7.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/junit-4.11.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/xz-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/guava-11.0.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.1.0-incubating.j
ar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/log4j-1.2.17.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-lang-2.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-net-3.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jettison-1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/xmlenc-0.52.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jersey-server-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/gson-2.2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jersey-json-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-digester-1.8.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/zookeeper-3.4.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jsp-api-2.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/httpcore-4.2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/hadoop-nfs-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/asm-3.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/bupry_dev/development/hadoop_
home/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/home/bupry_dev/de
velopment/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/home/bupry_dev/development/hadoop_home/ha
doop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/home/bupry_dev/d
evelopment/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG: java = 1.8.0_272
************************************************************/
2020-11-08 14:45:19,053 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2020-11-08 14:45:19,556 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2020-11-08 14:45:19,619 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2020-11-08 14:45:19,619 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2020-11-08 14:45:19,624 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576
2020-11-08 14:45:19,625 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is slave01
2020-11-08 14:45:19,631 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2020-11-08 14:45:19,651 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2020-11-08 14:45:19,653 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2020-11-08 14:45:19,653 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 5
2020-11-08 14:45:19,716 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2020-11-08 14:45:19,722 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2020-11-08 14:45:19,726 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2020-11-08 14:45:19,730 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2020-11-08 14:45:19,732 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2020-11-08 14:45:19,732 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2020-11-08 14:45:19,732 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2020-11-08 14:45:19,742 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 34275
2020-11-08 14:45:19,742 INFO org.mortbay.log: jetty-6.1.26
2020-11-08 14:45:19,857 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34275
2020-11-08 14:45:19,959 INFO org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer: Listening HTTP traffic on /0.0.0.0:50075
2020-11-08 14:45:19,985 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = bupry_dev
2020-11-08 14:45:19,985 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = supergroup
2020-11-08 14:45:20,014 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2020-11-08 14:45:20,026 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2020-11-08 14:45:20,051 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2020-11-08 14:45:20,062 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2020-11-08 14:45:20,086 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
2020-11-08 14:45:20,133 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to master/<IP-MASTER>:8020 starting to offer service
2020-11-08 14:45:20,140 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2020-11-08 14:45:20,140 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2020-11-08 14:45:20,359 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2020-11-08 14:45:20,365 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /tmp/hadoop-bupry_dev/dfs/data/in_use.lock acquired by nodename 24009@slave01
2020-11-08 14:45:20,398 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-593037128-<IP-MASTER>-1604722719674
2020-11-08 14:45:20,398 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled for /tmp/hadoop-bupry_dev/dfs/data/current/BP-593037128-<IP-MASTER>-1604722719674
2020-11-08 14:45:20,400 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=774215867;bpid=BP-593037128-<IP-MASTER>-1604722719674;lv=-56;nsInfo=lv=-63;cid=CID-5a615ffa-9a18-4811-b160-5534b7ffd396;nsid=774215867;c=0;bpid=BP-593037128-<IP-MASTER>-1604722719674;dnuuid=ae263c75-353e-4d4f-ba63-e827032d60cb
2020-11-08 14:45:20,432 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added new volume: DS-910d1722-8b33-461e-96cd-af79f1472434
2020-11-08 14:45:20,432 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /tmp/hadoop-bupry_dev/dfs/data/current, StorageType: DISK
2020-11-08 14:45:20,454 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
2020-11-08 14:45:20,454 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-593037128-<IP-MASTER>-1604722719674
2020-11-08 14:45:20,455 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-593037128-<IP-MASTER>-1604722719674 on volume /tmp/hadoop-bupry_dev/dfs/data/current...
2020-11-08 14:45:20,460 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Cached dfsUsed found for /tmp/hadoop-bupry_dev/dfs/data/current/BP-593037128-<IP-MASTER>-1604722719674/current: 1223270400
2020-11-08 14:45:20,461 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-593037128-<IP-MASTER>-1604722719674 on /tmp/hadoop-bupry_dev/dfs/data/current: 7ms
2020-11-08 14:45:20,462 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-593037128-<IP-MASTER>-1604722719674: 8ms
2020-11-08 14:45:20,462 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-593037128-<IP-MASTER>-1604722719674 on volume /tmp/hadoop-bupry_dev/dfs/data/current...
2020-11-08 14:45:20,466 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-593037128-<IP-MASTER>-1604722719674 on volume /tmp/hadoop-bupry_dev/dfs/data/current: 4ms
2020-11-08 14:45:20,466 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 4ms
2020-11-08 14:45:20,520 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/tmp/hadoop-bupry_dev/dfs/data, DS-910d1722-8b33-461e-96cd-af79f1472434): no suitable block pools found to scan. Waiting 1690415876 ms.
2020-11-08 14:45:20,522 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1604867716522 with interval 21600000
2020-11-08 14:45:20,524 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-593037128-<IP-MASTER>-1604722719674 (Datanode Uuid null) service to master/<IP-MASTER>:8020 beginning handshake with NN
2020-11-08 14:45:20,557 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-593037128-<IP-MASTER>-1604722719674 (Datanode Uuid null) service to master/<IP-MASTER>:8020 successfully registered with NN
2020-11-08 14:45:20,557 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode master/<IP-MASTER>:8020 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
2020-11-08 14:45:20,636 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-593037128-<IP-MASTER>-1604722719674 (Datanode Uuid ae263c75-353e-4d4f-ba63-e827032d60cb) service to master/<IP-MASTER>:8020 trying to claim ACTIVE state with txid=271
2020-11-08 14:45:20,636 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-593037128-<IP-MASTER>-1604722719674 (Datanode Uuid ae263c75-353e-4d4f-ba63-e827032d60cb) service to master/<IP-MASTER>:8020
2020-11-08 14:45:20,689 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0xf368e026e278, containing 1 storage report(s), of which we sent 1. The reports had 25 total blocks and used 1 RPC(s). This took 2 msec to generate and 50 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2020-11-08 14:45:20,689 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-593037128-<IP-MASTER>-1604722719674
2020-11-08 14:46:00,527 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-593037128-<IP-MASTER>-1604722719674:blk_1073741854_1030 src: /<IP-MASTER>:33984 dest: /<IP-SLAVE>:50010
2020-11-08 14:46:01,027 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /<IP-MASTER>:33984, dest: /<IP-SLAVE>:50010, bytes: 134217728, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1697118163_16, offset: 0, srvID: ae263c75-353e-4d4f-ba63-e827032d60cb, blockid: BP-593037128-<IP-MASTER>-1604722719674:blk_1073741854_1030, duration: 485437131
2020-11-08 14:46:01,028 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-593037128-<IP-MASTER>-1604722719674:blk_1073741854_1030, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2020-11-08 14:46:01,036 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-593037128-<IP-MASTER>-1604722719674:blk_1073741855_1031 src: /<IP-MASTER>:33986 dest: /<IP-SLAVE>:50010
2020-11-08 14:46:01,472 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /<IP-MASTER>:33986, dest: /<IP-SLAVE>:50010, bytes: 106894960, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1697118163_16, offset: 0, srvID: ae263c75-353e-4d4f-ba63-e827032d60cb, blockid: BP-593037128-<IP-MASTER>-1604722719674:blk_1073741855_1031, duration: 434375849
2020-11-08 14:46:01,472 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-593037128-<IP-MASTER>-1604722719674:blk_1073741855_1031, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2020-11-08 14:46:01,595 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-593037128-<IP-MASTER>-1604722719674:blk_1073741856_1032 src: /<IP-MASTER>:33988 dest: /<IP-SLAVE>:50010
2020-11-08 14:46:01,599 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /<IP-MASTER>:33988, dest: /<IP-SLAVE>:50010, bytes: 593464, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1697118163_16, offset: 0, srvID: ae263c75-353e-4d4f-ba63-e827032d60cb, blockid: BP-593037128-<IP-MASTER>-1604722719674:blk_1073741856_1032, duration: 2761981
2020-11-08 14:46:01,599 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-593037128-<IP-MASTER>-1604722719674:blk_1073741856_1032, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2020-11-08 14:46:01,619 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-593037128-<IP-MASTER>-1604722719674:blk_1073741857_1033 src: /<IP-MASTER>:33990 dest: /<IP-SLAVE>:50010
2020-11-08 14:46:01,622 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /<IP-MASTER>:33990, dest: /<IP-SLAVE>:50010, bytes: 42437, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1697118163_16, offset: 0, srvID: ae263c75-353e-4d4f-ba63-e827032d60cb, blockid: BP-593037128-<IP-MASTER>-1604722719674:blk_1073741857_1033, duration: 1754280
2020-11-08 14:46:01,622 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-593037128-<IP-MASTER>-1604722719674:blk_1073741857_1033, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2020-11-08 14:46:02,119 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-593037128-<IP-MASTER>-1604722719674:blk_1073741858_1034 src: /<IP-MASTER>:33992 dest: /<IP-SLAVE>:50010
2020-11-08 14:46:02,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /<IP-MASTER>:33992, dest: /<IP-SLAVE>:50010, bytes: 199682, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1697118163_16, offset: 0, srvID: ae263c75-353e-4d4f-ba63-e827032d60cb, blockid: BP-593037128-<IP-MASTER>-1604722719674:blk_1073741858_1034, duration: 8243607
2020-11-08 14:46:02,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-593037128-<IP-MASTER>-1604722719674:blk_1073741858_1034, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
... View more
11-08-2020
06:34 AM
@Shelton My current server sizes are the following: Master: 32 GB RAM and 8 CPU(s); Slave: 32 GB RAM and 8 CPU(s). My current Spark config, run through Python3, is the following (a sketch with the resource settings passed explicitly is after the NodeManager log below): import pyspark
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("test").master("yarn").getOrCreate() And yes, the container size was just 2 GB at that moment because I was testing different configurations; however, I have also tested with bigger sizes (8 GB, 16 GB, 28 GB) with the same results. SLAVE01 | YARN - NODEMANAGER LOGS | 2020-11-08 14:45:31,532 INFO org.apache.hadoop.yarn.server.nodemanager.NodeManager: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NodeManager
STARTUP_MSG: host = bupry-01/<IP-SLAVE>
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.7.3
STARTUP_MSG: classpath = /home/bupry_dev/development/hadoop_home/hadoop-2.7.3/etc/hadoop/:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/etc/hadoop/:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/etc/hadoop/:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/servlet-api-2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/avro-1.7.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/stax-api-1.0-2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jsch-0.1.42.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/paranamer-2.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/asm-3.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jetty-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/hamcrest-core-1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/activation-1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jersey-core-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jets3t-0.9.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/curator-framework-2.7.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/httpclient-4.2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-io-2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-codec-1.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/curator-client-2.7.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/junit-4.11.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/xz-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/commo
n/lib/guava-11.0.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/log4j-1.2.17.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-lang-2.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-net-3.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jettison-1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/xmlenc-0.52.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jersey-server-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/gson-2.2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jersey-json-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-digester-1.8.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/zookeeper-3.4.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/jsp-api-2.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/httpcore-4.2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/common/hadoop-nfs-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/asm-3.2.j
ar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-co
dec-1.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/home/bupry_dev/de
velopment/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/home/bupry_d
ev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/bupry_
dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/etc/hadoop//nm-config/log4j.properties
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG: java = 1.8.0_272
************************************************************/
2020-11-08 14:45:31,539 INFO org.apache.hadoop.yarn.server.nodemanager.NodeManager: registered UNIX signal handlers for [TERM, HUP, INT]
2020-11-08 14:45:32,090 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher
2020-11-08 14:45:32,091 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher
2020-11-08 14:45:32,091 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.LocalizationEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService
2020-11-08 14:45:32,092 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServicesEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices
2020-11-08 14:45:32,092 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
2020-11-08 14:45:32,092 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncherEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher
2020-11-08 14:45:32,108 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.ContainerManagerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl
2020-11-08 14:45:32,109 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.NodeManagerEventType for class org.apache.hadoop.yarn.server.nodemanager.NodeManager
2020-11-08 14:45:32,141 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2020-11-08 14:45:32,209 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2020-11-08 14:45:32,209 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NodeManager metrics system started
2020-11-08 14:45:32,228 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.event.LogHandlerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService
2020-11-08 14:45:32,230 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.sharedcache.SharedCacheUploadEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.sharedcache.SharedCacheUploadService
2020-11-08 14:45:32,230 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: per directory file limit = 8192
2020-11-08 14:45:32,274 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: usercache path : file:/tmp/hadoop-bupry_dev/nm-local-dir/usercache_DEL_1604846732232
2020-11-08 14:45:32,321 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.LocalizerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker
2020-11-08 14:45:32,339 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: The Auxilurary Service named 'mapreduce_shuffle' in the configuration is for class org.apache.hadoop.mapred.ShuffleHandler which has a name of 'httpshuffle'. Because these are not the same tools trying to send ServiceData and read Service Meta Data may have issues unless the refer to the name in the config.
2020-11-08 14:45:32,339 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Adding auxiliary service httpshuffle, "mapreduce_shuffle"
2020-11-08 14:45:32,389 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Using ResourceCalculatorPlugin : org.apache.hadoop.yarn.util.LinuxResourceCalculatorPlugin@663c9e7a
2020-11-08 14:45:32,389 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Using ResourceCalculatorProcessTree : null
2020-11-08 14:45:32,389 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Physical memory check enabled: true
2020-11-08 14:45:32,389 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Virtual memory check enabled: true
2020-11-08 14:45:32,392 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: NodeManager configured with 28 G physical memory allocated to containers, which is more than 80% of the total physical memory available (31.4 G). Thrashing might happen.
2020-11-08 14:45:32,395 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Initialized nodemanager for null: physical-memory=28672 virtual-memory=60212 virtual-cores=6
2020-11-08 14:45:32,425 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2020-11-08 14:45:32,445 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 38241
2020-11-08 14:45:32,469 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ContainerManagementProtocolPB to the server
2020-11-08 14:45:32,469 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Blocking new container-requests as container manager rpc server is still starting.
2020-11-08 14:45:32,470 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2020-11-08 14:45:32,470 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 38241: starting
2020-11-08 14:45:32,476 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMContainerTokenSecretManager: Updating node address : slave01:38241
2020-11-08 14:45:32,481 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2020-11-08 14:45:32,482 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8040
2020-11-08 14:45:32,483 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.nodemanager.api.LocalizationProtocolPB to the server
2020-11-08 14:45:32,484 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2020-11-08 14:45:32,484 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8040: starting
2020-11-08 14:45:32,484 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Localizer started on port 8040
2020-11-08 14:45:32,494 INFO org.apache.hadoop.mapred.IndexCache: IndexCache created with max memory = 10485760
2020-11-08 14:45:32,502 INFO org.apache.hadoop.mapred.ShuffleHandler: httpshuffle listening on port 13562
2020-11-08 14:45:32,504 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: ContainerManager started at bupry-01/<IP-SLAVE>:38241
2020-11-08 14:45:32,504 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: ContainerManager bound to 0.0.0.0/0.0.0.0:0
2020-11-08 14:45:32,505 INFO org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer: Instantiating NMWebApp at 0.0.0.0:8042
2020-11-08 14:45:32,554 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2020-11-08 14:45:32,559 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2020-11-08 14:45:32,563 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.nodemanager is not defined
2020-11-08 14:45:32,569 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2020-11-08 14:45:32,570 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context node
2020-11-08 14:45:32,570 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2020-11-08 14:45:32,570 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2020-11-08 14:45:32,572 INFO org.apache.hadoop.http.HttpServer2: adding path spec: /node/*
2020-11-08 14:45:32,573 INFO org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2020-11-08 14:45:32,810 INFO org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2020-11-08 14:45:32,812 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 8042
2020-11-08 14:45:32,813 INFO org.mortbay.log: jetty-6.1.26
2020-11-08 14:45:32,832 INFO org.mortbay.log: Extract jar:file:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar!/webapps/node to /tmp/Jetty_0_0_0_0_8042_node____19tj0x/webapp
2020-11-08 14:45:33,485 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:8042
2020-11-08 14:45:33,485 INFO org.apache.hadoop.yarn.webapp.WebApps: Web app node started at 8042
2020-11-08 14:45:33,491 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at /<IP-MASTER>:8031
2020-11-08 14:45:33,514 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out 0 NM container statuses: []
2020-11-08 14:45:33,519 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Registering with RM using containers :[]
2020-11-08 14:45:33,667 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMContainerTokenSecretManager: Rolling master-key for container-tokens, got key with id -732063455
2020-11-08 14:45:33,670 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMTokenSecretManagerInNM: Rolling master-key for container-tokens, got key with id 2122884811
2020-11-08 14:45:33,670 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Registered with ResourceManager as slave01:38241 with total resource of <memory:28672, vCores:6>
2020-11-08 14:45:33,670 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Notifying ContainerManager to unblock new container-requests
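For reference, this is a minimal sketch of how the same session could be built with the executor/driver resources passed explicitly through .config() keys; the 8g / 2-core / 4g values below are only placeholders from my tests, not my actual working configuration:
import pyspark
from pyspark.sql import SparkSession

# Same "test" session on YARN, but with resources set explicitly.
# Placeholder values; they must fit inside yarn.scheduler.maximum-allocation-mb.
spark = (
    SparkSession.builder
    .appName("test")
    .master("yarn")
    .config("spark.executor.memory", "8g")
    .config("spark.executor.cores", "2")
    .config("spark.driver.memory", "4g")
    .getOrCreate()
)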
... View more
11-07-2020
07:15 AM
Hello @Shelton, I have a new problem and was wondering if you could help me out: https://community.cloudera.com/t5/Support-Questions/Process-Stuck-in-Hadoop-Cluster/td-p/305553 I'm trying to run a process and the yarn.nodemanager log gets stuck at the following lines (a quick check of the node registration is sketched after the log): 2020-11-07 04:19:34,342 INFO org.apache.hadoop.yarn.webapp.WebApps: Web app node started at 8042
2020-11-07 04:19:34,347 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at /138.68.238.32:8031
2020-11-07 04:19:34,368 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out 0 NM container statuses: []
2020-11-07 04:19:34,373 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Registering with RM using containers :[]
2020-11-07 04:19:34,520 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMContainerTokenSecretManager: Rolling master-key for container-tokens, got key with id 1152592273
2020-11-07 04:19:34,523 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMTokenSecretManagerInNM: Rolling master-key for container-tokens, got key with id -1064351767
2020-11-07 04:19:34,524 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Registered with ResourceManager as slave01:44367 with total resource of <memory:28672, vCores:6>
2020-11-07 04:19:34,524 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Notifying ContainerManager to unblock new container-requests
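In case it helps narrow this down, this is a minimal check I can run from the master to confirm the node really registered and to see where the application sits. It assumes HADOOP_HOME/bin is on the PATH; these are just the standard YARN CLI commands, not output I have captured:
# list every NodeManager the ResourceManager knows about, including unhealthy ones
yarn node -list -all
# list applications that are waiting or running, to see if anything moved past ACCEPTED
yarn application -list -appStates ACCEPTED,RUNNING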
... View more
11-06-2020
02:37 PM
Hello, I have a Hadoop cluster where I am using HDFS and YARN. On top of that, I'm trying to run Spark on the same cluster. I have everything communicating and working; however, it seems that the ResourceManager is not granting the container resources, and the application is stuck at: YarnApplicationState: ACCEPTED: waiting for AM container to be allocated, launched and register with RM. Is there a way to fix this?
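For reference, these are the kinds of yarn-site.xml properties that control whether the scheduler has room to launch the AM container; the values below are only illustrative examples, not my actual configuration. If the Spark AM asks for more memory than yarn.scheduler.maximum-allocation-mb, or the queue has no free headroom, the application stays in ACCEPTED: <property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>28672</value> <!-- total memory this NodeManager may hand out to containers -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>6</value> <!-- total vcores this NodeManager may hand out -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value> <!-- smallest container the scheduler will grant -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value> <!-- largest single container; the AM request must fit under this -->
</property>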
... View more
Labels:
- Apache Hadoop
- Apache Spark
11-03-2020
03:38 PM
This is my current setup in /etc/hosts, @Shelton: # Your system has configured 'manage_etc_hosts' as True.
# As a result, if you wish for changes to this file to persist
# then you will need to either
# a.) make changes to the master file in /etc/cloud/templates/hosts.debian.tmpl
# b.) change or remove the value of 'manage_etc_hosts' in
# /etc/cloud/cloud.cfg or cloud-config from user-data
#
#ip.ip.ip.ip master
#ip.ip.ip.ip slave01
# The following lines are desirable for IPv6 capable hosts
.... Do I need to create anything else? I have read in some guides that people create files under /hadoop/... called workers or masters
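For context, here is roughly what I understand the pieces should look like; the IPs and hostnames below are placeholders, not my real values. In Hadoop 2.7.x the worker list is a plain file, $HADOOP_HOME/etc/hadoop/slaves (renamed to workers in Hadoop 3), with one hostname per line; a masters file is only used to choose where the secondary NameNode runs. # /etc/hosts on every node (entries uncommented so the names resolve)
<master-ip> master
<slave-ip> slave01
# $HADOOP_HOME/etc/hadoop/slaves on the master (one worker hostname per line)
slave01 As far as I can tell, start-dfs.sh and start-yarn.sh read that slaves file to decide where to launch DataNodes and NodeManagers.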
... View more
11-03-2020
03:04 PM
@Shelton Here are the logs from container_1604444884749_0001_01_000001/: Nov 03, 2020 11:10:09 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver as a provider class
Nov 03, 2020 11:10:09 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Nov 03, 2020 11:10:09 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices as a root resource class
Nov 03, 2020 11:10:09 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
Nov 03, 2020 11:10:10 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
Nov 03, 2020 11:10:10 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
Nov 03, 2020 11:10:10 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices to GuiceManagedComponentProvider with the scope "PerRequest"
log4j:WARN No appenders could be found for logger (org.apache.hadoop.ipc.Server).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info. Syslog: 2020-11-03 23:10:05,691 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1604444884749_0001_000001
2020-11-03 23:10:06,161 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2020-11-03 23:10:06,161 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (appAttemptId { application_id { id: 1 cluster_timestamp: 1604444884749 } attemptId: 1 } keyId: -7945587)
2020-11-03 23:10:06,627 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
2020-11-03 23:10:06,639 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config null
2020-11-03 23:10:06,715 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 1
2020-11-03 23:10:07,563 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2020-11-03 23:10:07,846 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2020-11-03 23:10:07,856 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2020-11-03 23:10:07,857 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2020-11-03 23:10:07,858 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2020-11-03 23:10:07,858 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2020-11-03 23:10:07,862 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
2020-11-03 23:10:07,863 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2020-11-03 23:10:07,863 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2020-11-03 23:10:07,923 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://master:9000]
2020-11-03 23:10:07,969 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://master:9000]
2020-11-03 23:10:08,038 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://master:9000]
2020-11-03 23:10:08,056 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Emitting job history data to the timeline server is not enabled
2020-11-03 23:10:08,107 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
2020-11-03 23:10:08,249 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2020-11-03 23:10:08,390 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2020-11-03 23:10:08,390 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics system started
2020-11-03 23:10:08,401 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for job_1604444884749_0001 to jobTokenSecretManager
2020-11-03 23:10:08,578 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing job_1604444884749_0001 because: not enabled;
2020-11-03 23:10:08,601 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job job_1604444884749_0001 = 24. Number of splits = 1
2020-11-03 23:10:08,602 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for job job_1604444884749_0001 = 1
2020-11-03 23:10:08,602 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1604444884749_0001Job Transitioned from NEW to INITED
2020-11-03 23:10:08,603 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching normal, non-uberized, multi-container job job_1604444884749_0001.
2020-11-03 23:10:08,643 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2020-11-03 23:10:08,676 INFO [Socket Reader #1 for port 32889] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 32889
2020-11-03 23:10:08,699 INFO [main] org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
2020-11-03 23:10:08,711 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2020-11-03 23:10:08,725 INFO [IPC Server listener on 32889] org.apache.hadoop.ipc.Server: IPC Server listener on 32889: starting
2020-11-03 23:10:08,744 INFO [main] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated MRClientService at slave01/master:32889
2020-11-03 23:10:08,850 INFO [main] org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2020-11-03 23:10:08,865 INFO [main] org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2020-11-03 23:10:08,878 INFO [main] org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.mapreduce is not defined
2020-11-03 23:10:08,887 INFO [main] org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2020-11-03 23:10:08,891 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context mapreduce
2020-11-03 23:10:08,891 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context static
2020-11-03 23:10:08,894 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /mapreduce/*
2020-11-03 23:10:08,894 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2020-11-03 23:10:09,453 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2020-11-03 23:10:09,454 INFO [main] org.apache.hadoop.http.HttpServer2: Jetty bound to port 33221
2020-11-03 23:10:09,455 INFO [main] org.mortbay.log: jetty-6.1.26
2020-11-03 23:10:09,546 INFO [main] org.mortbay.log: Extract jar:file:/home/bupry_dev/development/hadoop_home/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar!/webapps/mapreduce to /tmp/hadoop-bupry_dev/nm-local-dir/usercache/bupry_dev/appcache/application_1604444884749_0001/container_1604444884749_0001_01_000001/tmp/Jetty_0_0_0_0_33221_mapreduce____dn5byg/webapp
2020-11-03 23:10:11,044 INFO [main] org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:33221
2020-11-03 23:10:11,053 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Web app mapreduce started at 33221
2020-11-03 23:10:11,065 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator: JOB_CREATE job_1604444884749_0001
2020-11-03 23:10:11,069 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2020-11-03 23:10:11,081 INFO [Socket Reader #1 for port 43947] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 43947
2020-11-03 23:10:11,107 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2020-11-03 23:10:11,115 INFO [IPC Server listener on 43947] org.apache.hadoop.ipc.Server: IPC Server listener on 43947: starting
2020-11-03 23:10:11,339 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true
2020-11-03 23:10:11,343 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3
2020-11-03 23:10:11,343 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33
2020-11-03 23:10:11,398 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at /master:8030
2020-11-03 23:10:11,513 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: maxContainerCapability: <memory:9830, vCores:32>
2020-11-03 23:10:11,513 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: queue: default
2020-11-03 23:10:11,523 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper limit on the thread pool size is 500
2020-11-03 23:10:11,523 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: The thread pool initial size is 10
2020-11-03 23:10:11,524 INFO [main] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
2020-11-03 23:10:11,548 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1604444884749_0001Job Transitioned from INITED to SETUP
2020-11-03 23:10:11,564 INFO [CommitterEvent Processor #0] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_SETUP
2020-11-03 23:10:11,577 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1604444884749_0001Job Transitioned from SETUP to RUNNING
2020-11-03 23:10:11,616 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.util.RackResolver: Resolved bupry-01 to /default-rack
2020-11-03 23:10:11,619 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1604444884749_0001_m_000000 Task Transitioned from NEW to SCHEDULED
2020-11-03 23:10:11,633 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1604444884749_0001_r_000000 Task Transitioned from NEW to SCHEDULED
2020-11-03 23:10:11,636 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2020-11-03 23:10:11,636 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_r_000000_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2020-11-03 23:10:11,650 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: mapResourceRequest:<memory:1228, vCores:1>
2020-11-03 23:10:11,657 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: reduceResourceRequest:<memory:1228, vCores:1>
2020-11-03 23:10:11,728 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1604444884749_0001, File: hdfs://master:9000/tmp/hadoop-yarn/staging/bupry_dev/.staging/job_1604444884749_0001/job_1604444884749_0001_1.jhist
2020-11-03 23:10:12,525 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0
2020-11-03 23:10:12,575 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1604444884749_0001: ask=3 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:6144, vCores:1> knownNMs=1
2020-11-03 23:10:12,579 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:6144, vCores:1>
2020-11-03 23:10:12,580 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1
2020-11-03 23:10:13,585 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:6144, vCores:1>
2020-11-03 23:10:13,585 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1
2020-11-03 23:10:14,602 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
2020-11-03 23:10:14,603 INFO [RMCommunicator Allocator] org.apache.hadoop.yarn.util.RackResolver: Resolved slave01 to /default-rack
2020-11-03 23:10:14,604 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1604444884749_0001_01_000002 to attempt_1604444884749_0001_m_000000_0
2020-11-03 23:10:14,606 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:4096, vCores:1>
2020-11-03 23:10:14,606 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1
2020-11-03 23:10:14,606 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:1
2020-11-03 23:10:14,683 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.util.RackResolver: Resolved slave01 to /default-rack
2020-11-03 23:10:14,710 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar file on the remote FS is hdfs://master:9000/tmp/hadoop-yarn/staging/bupry_dev/.staging/job_1604444884749_0001/job.jar
2020-11-03 23:10:14,723 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf file on the remote FS is /tmp/hadoop-yarn/staging/bupry_dev/.staging/job_1604444884749_0001/job.xml
2020-11-03 23:10:14,728 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0 tokens and #1 secret keys for NM use for launching container
2020-11-03 23:10:14,728 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of containertokens_dob is 1
2020-11-03 23:10:14,728 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting shuffle token in serviceData
2020-11-03 23:10:14,759 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2020-11-03 23:10:14,776 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1604444884749_0001_01_000002 taskAttempt attempt_1604444884749_0001_m_000000_0
2020-11-03 23:10:14,782 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1604444884749_0001_m_000000_0
2020-11-03 23:10:14,783 INFO [ContainerLauncher #0] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : slave01:40455
2020-11-03 23:10:14,879 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1604444884749_0001_m_000000_0 : 13562
2020-11-03 23:10:14,881 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1604444884749_0001_m_000000_0] using containerId: [container_1604444884749_0001_01_000002 on NM: [slave01:40455]
2020-11-03 23:10:14,883 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_0 TaskAttempt Transitioned from ASSIGNED to RUNNING
2020-11-03 23:10:14,883 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator: ATTEMPT_START task_1604444884749_0001_m_000000
2020-11-03 23:10:14,883 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1604444884749_0001_m_000000 Task Transitioned from SCHEDULED to RUNNING
2020-11-03 23:10:15,617 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1604444884749_0001: ask=3 release= 0 newContainers=0 finishedContainers=1 resourcelimit=<memory:6144, vCores:1> knownNMs=1
2020-11-03 23:10:15,617 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_1604444884749_0001_01_000002
2020-11-03 23:10:15,618 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:6144, vCores:1>
2020-11-03 23:10:15,618 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1
2020-11-03 23:10:15,618 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:1
2020-11-03 23:10:15,621 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_0 TaskAttempt Transitioned from RUNNING to FAIL_CONTAINER_CLEANUP
2020-11-03 23:10:15,621 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1604444884749_0001_m_000000_0: Exception from container-launch.
Container id: container_1604444884749_0001_01_000002
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 1
2020-11-03 23:10:15,630 INFO [ContainerLauncher #1] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_1604444884749_0001_01_000002 taskAttempt attempt_1604444884749_0001_m_000000_0
2020-11-03 23:10:15,634 INFO [ContainerLauncher #1] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1604444884749_0001_m_000000_0
2020-11-03 23:10:15,638 INFO [ContainerLauncher #1] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : slave01:40455
2020-11-03 23:10:15,672 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_0 TaskAttempt Transitioned from FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
2020-11-03 23:10:15,684 INFO [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: TASK_ABORT
2020-11-03 23:10:15,698 WARN [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete hdfs://master:9000/output5/_temporary/1/_temporary/attempt_1604444884749_0001_m_000000_0
2020-11-03 23:10:15,699 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_0 TaskAttempt Transitioned from FAIL_TASK_CLEANUP to FAILED
2020-11-03 23:10:15,705 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.util.RackResolver: Resolved bupry-01 to /default-rack
2020-11-03 23:10:15,705 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures on node slave01
2020-11-03 23:10:15,706 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_1 TaskAttempt Transitioned from NEW to UNASSIGNED
2020-11-03 23:10:15,708 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added attempt_1604444884749_0001_m_000000_1 to list of failed maps
2020-11-03 23:10:16,618 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:1
2020-11-03 23:10:16,624 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1604444884749_0001: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:6144, vCores:1> knownNMs=1
2020-11-03 23:10:16,625 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:6144, vCores:1>
2020-11-03 23:10:16,625 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1
2020-11-03 23:10:17,631 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
2020-11-03 23:10:17,631 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning container Container: [ContainerId: container_1604444884749_0001_01_000003, NodeId: slave01:40455, NodeHttpAddress: slave01:8042, Resource: <memory:2048, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken, service: master:40455 }, ] to fast fail map
2020-11-03 23:10:17,631 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from earlierFailedMaps
2020-11-03 23:10:17,632 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1604444884749_0001_01_000003 to attempt_1604444884749_0001_m_000000_1
2020-11-03 23:10:17,632 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:4096, vCores:1>
2020-11-03 23:10:17,632 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1
2020-11-03 23:10:17,632 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:1
2020-11-03 23:10:17,632 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.util.RackResolver: Resolved slave01 to /default-rack
2020-11-03 23:10:17,633 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_1 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2020-11-03 23:10:17,645 INFO [ContainerLauncher #2] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1604444884749_0001_01_000003 taskAttempt attempt_1604444884749_0001_m_000000_1
2020-11-03 23:10:17,645 INFO [ContainerLauncher #2] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1604444884749_0001_m_000000_1
2020-11-03 23:10:17,645 INFO [ContainerLauncher #2] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : slave01:40455
2020-11-03 23:10:17,670 INFO [ContainerLauncher #2] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1604444884749_0001_m_000000_1 : 13562
2020-11-03 23:10:17,670 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1604444884749_0001_m_000000_1] using containerId: [container_1604444884749_0001_01_000003 on NM: [slave01:40455]
2020-11-03 23:10:17,671 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_1 TaskAttempt Transitioned from ASSIGNED to RUNNING
2020-11-03 23:10:17,671 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator: ATTEMPT_START task_1604444884749_0001_m_000000
2020-11-03 23:10:18,637 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1604444884749_0001: ask=1 release= 0 newContainers=0 finishedContainers=1 resourcelimit=<memory:6144, vCores:1> knownNMs=1
2020-11-03 23:10:18,637 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_1604444884749_0001_01_000003
2020-11-03 23:10:18,637 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:6144, vCores:1>
2020-11-03 23:10:18,637 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1
2020-11-03 23:10:18,637 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:1
2020-11-03 23:10:18,638 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_1 TaskAttempt Transitioned from RUNNING to FAIL_CONTAINER_CLEANUP
2020-11-03 23:10:18,638 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1604444884749_0001_m_000000_1: Exception from container-launch.
Container id: container_1604444884749_0001_01_000003
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 1
2020-11-03 23:10:18,649 INFO [ContainerLauncher #3] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_1604444884749_0001_01_000003 taskAttempt attempt_1604444884749_0001_m_000000_1
2020-11-03 23:10:18,653 INFO [ContainerLauncher #3] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1604444884749_0001_m_000000_1
2020-11-03 23:10:18,653 INFO [ContainerLauncher #3] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : slave01:40455
2020-11-03 23:10:18,694 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_1 TaskAttempt Transitioned from FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
2020-11-03 23:10:18,704 INFO [CommitterEvent Processor #2] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: TASK_ABORT
2020-11-03 23:10:18,709 WARN [CommitterEvent Processor #2] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete hdfs://master:9000/output5/_temporary/1/_temporary/attempt_1604444884749_0001_m_000000_1
2020-11-03 23:10:18,709 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_1 TaskAttempt Transitioned from FAIL_TASK_CLEANUP to FAILED
2020-11-03 23:10:18,710 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.util.RackResolver: Resolved bupry-01 to /default-rack
2020-11-03 23:10:18,710 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures on node slave01
2020-11-03 23:10:18,710 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_2 TaskAttempt Transitioned from NEW to UNASSIGNED
2020-11-03 23:10:18,710 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added attempt_1604444884749_0001_m_000000_2 to list of failed maps
2020-11-03 23:10:19,638 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:1
2020-11-03 23:10:19,642 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1604444884749_0001: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:6144, vCores:1> knownNMs=1
2020-11-03 23:10:19,642 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:6144, vCores:1>
2020-11-03 23:10:19,642 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1
2020-11-03 23:10:20,650 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
2020-11-03 23:10:20,650 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning container Container: [ContainerId: container_1604444884749_0001_01_000004, NodeId: slave01:40455, NodeHttpAddress: slave01:8042, Resource: <memory:2048, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken, service: master:40455 }, ] to fast fail map
2020-11-03 23:10:20,650 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from earlierFailedMaps
2020-11-03 23:10:20,651 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1604444884749_0001_01_000004 to attempt_1604444884749_0001_m_000000_2
2020-11-03 23:10:20,651 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:4096, vCores:1>
2020-11-03 23:10:20,651 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1
2020-11-03 23:10:20,651 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0 HostLocal:0 RackLocal:1
2020-11-03 23:10:20,651 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.util.RackResolver: Resolved slave01 to /default-rack
2020-11-03 23:10:20,652 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_2 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2020-11-03 23:10:20,663 INFO [ContainerLauncher #4] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1604444884749_0001_01_000004 taskAttempt attempt_1604444884749_0001_m_000000_2
2020-11-03 23:10:20,663 INFO [ContainerLauncher #4] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1604444884749_0001_m_000000_2
2020-11-03 23:10:20,663 INFO [ContainerLauncher #4] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : slave01:40455
2020-11-03 23:10:20,689 INFO [ContainerLauncher #4] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1604444884749_0001_m_000000_2 : 13562
2020-11-03 23:10:20,690 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1604444884749_0001_m_000000_2] using containerId: [container_1604444884749_0001_01_000004 on NM: [slave01:40455]
2020-11-03 23:10:20,690 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_2 TaskAttempt Transitioned from ASSIGNED to RUNNING
2020-11-03 23:10:20,690 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator: ATTEMPT_START task_1604444884749_0001_m_000000
2020-11-03 23:10:21,656 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1604444884749_0001: ask=1 release= 0 newContainers=0 finishedContainers=1 resourcelimit=<memory:6144, vCores:1> knownNMs=1
2020-11-03 23:10:21,657 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_1604444884749_0001_01_000004
2020-11-03 23:10:21,657 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:6144, vCores:1>
2020-11-03 23:10:21,657 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1
2020-11-03 23:10:21,657 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0 HostLocal:0 RackLocal:1
2020-11-03 23:10:21,657 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_2 TaskAttempt Transitioned from RUNNING to FAIL_CONTAINER_CLEANUP
2020-11-03 23:10:21,657 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1604444884749_0001_m_000000_2: Exception from container-launch.
Container id: container_1604444884749_0001_01_000004
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 1
2020-11-03 23:10:21,668 INFO [ContainerLauncher #5] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_1604444884749_0001_01_000004 taskAttempt attempt_1604444884749_0001_m_000000_2
2020-11-03 23:10:21,672 INFO [ContainerLauncher #5] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1604444884749_0001_m_000000_2
2020-11-03 23:10:21,672 INFO [ContainerLauncher #5] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : slave01:40455
2020-11-03 23:10:21,699 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_2 TaskAttempt Transitioned from FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
2020-11-03 23:10:21,712 INFO [CommitterEvent Processor #3] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: TASK_ABORT
2020-11-03 23:10:21,715 WARN [CommitterEvent Processor #3] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete hdfs://master:9000/output5/_temporary/1/_temporary/attempt_1604444884749_0001_m_000000_2
2020-11-03 23:10:21,716 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_2 TaskAttempt Transitioned from FAIL_TASK_CLEANUP to FAILED
2020-11-03 23:10:21,716 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.util.RackResolver: Resolved bupry-01 to /default-rack
2020-11-03 23:10:21,717 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures on node slave01
2020-11-03 23:10:21,717 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Blacklisted host slave01
2020-11-03 23:10:21,717 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_3 TaskAttempt Transitioned from NEW to UNASSIGNED
2020-11-03 23:10:21,717 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added attempt_1604444884749_0001_m_000000_3 to list of failed maps
2020-11-03 23:10:22,657 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:1 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0 HostLocal:0 RackLocal:1
2020-11-03 23:10:22,662 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1604444884749_0001: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:6144, vCores:1> knownNMs=1
2020-11-03 23:10:22,662 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the blacklist for application_1604444884749_0001: blacklistAdditions=1 blacklistRemovals=0
2020-11-03 23:10:22,662 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore blacklisting set to true. Known: 1, Blacklisted: 1, 100%
2020-11-03 23:10:22,662 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:6144, vCores:1>
2020-11-03 23:10:22,662 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1
2020-11-03 23:10:23,667 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the blacklist for application_1604444884749_0001: blacklistAdditions=0 blacklistRemovals=1
2020-11-03 23:10:23,667 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:6144, vCores:1>
2020-11-03 23:10:23,667 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1
2020-11-03 23:10:24,675 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
2020-11-03 23:10:24,675 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning container Container: [ContainerId: container_1604444884749_0001_01_000005, NodeId: slave01:40455, NodeHttpAddress: slave01:8042, Resource: <memory:2048, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken, service: master:40455 }, ] to fast fail map
2020-11-03 23:10:24,675 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from earlierFailedMaps
2020-11-03 23:10:24,676 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1604444884749_0001_01_000005 to attempt_1604444884749_0001_m_000000_3
2020-11-03 23:10:24,676 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:4096, vCores:1>
2020-11-03 23:10:24,676 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1
2020-11-03 23:10:24,676 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:0 RackLocal:1
2020-11-03 23:10:24,676 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.util.RackResolver: Resolved slave01 to /default-rack
2020-11-03 23:10:24,676 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_3 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2020-11-03 23:10:24,687 INFO [ContainerLauncher #6] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1604444884749_0001_01_000005 taskAttempt attempt_1604444884749_0001_m_000000_3
2020-11-03 23:10:24,687 INFO [ContainerLauncher #6] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1604444884749_0001_m_000000_3
2020-11-03 23:10:24,687 INFO [ContainerLauncher #6] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : slave01:40455
2020-11-03 23:10:24,711 INFO [ContainerLauncher #6] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1604444884749_0001_m_000000_3 : 13562
2020-11-03 23:10:24,712 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1604444884749_0001_m_000000_3] using containerId: [container_1604444884749_0001_01_000005 on NM: [slave01:40455]
2020-11-03 23:10:24,712 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_3 TaskAttempt Transitioned from ASSIGNED to RUNNING
2020-11-03 23:10:24,712 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator: ATTEMPT_START task_1604444884749_0001_m_000000
2020-11-03 23:10:25,681 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1604444884749_0001: ask=1 release= 0 newContainers=0 finishedContainers=1 resourcelimit=<memory:6144, vCores:1> knownNMs=1
2020-11-03 23:10:25,681 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_1604444884749_0001_01_000005
2020-11-03 23:10:25,681 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:6144, vCores:1>
2020-11-03 23:10:25,681 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1
2020-11-03 23:10:25,681 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:0 RackLocal:1
2020-11-03 23:10:25,681 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_3 TaskAttempt Transitioned from RUNNING to FAIL_CONTAINER_CLEANUP
2020-11-03 23:10:25,681 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1604444884749_0001_m_000000_3: Exception from container-launch.
Container id: container_1604444884749_0001_01_000005
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 1
2020-11-03 23:10:25,690 INFO [ContainerLauncher #7] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_1604444884749_0001_01_000005 taskAttempt attempt_1604444884749_0001_m_000000_3
2020-11-03 23:10:25,691 INFO [ContainerLauncher #7] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1604444884749_0001_m_000000_3
2020-11-03 23:10:25,691 INFO [ContainerLauncher #7] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : slave01:40455
2020-11-03 23:10:25,723 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_3 TaskAttempt Transitioned from FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
2020-11-03 23:10:25,734 INFO [CommitterEvent Processor #4] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: TASK_ABORT
2020-11-03 23:10:25,739 WARN [CommitterEvent Processor #4] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete hdfs://master:9000/output5/_temporary/1/_temporary/attempt_1604444884749_0001_m_000000_3
2020-11-03 23:10:25,740 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_m_000000_3 TaskAttempt Transitioned from FAIL_TASK_CLEANUP to FAILED
2020-11-03 23:10:25,753 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1604444884749_0001_m_000000 Task Transitioned from RUNNING to FAILED
2020-11-03 23:10:25,753 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1
2020-11-03 23:10:25,753 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as tasks failed. failedMaps:1 failedReduces:0
2020-11-03 23:10:25,773 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1604444884749_0001Job Transitioned from RUNNING to FAIL_WAIT
2020-11-03 23:10:25,773 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1604444884749_0001_r_000000 Task Transitioned from SCHEDULED to KILL_WAIT
2020-11-03 23:10:25,774 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1604444884749_0001_r_000000_0 TaskAttempt Transitioned from UNASSIGNED to KILLED
2020-11-03 23:10:25,774 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Processing the event EventType: CONTAINER_DEALLOCATE
2020-11-03 23:10:25,774 ERROR [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not deallocate container for task attemptId attempt_1604444884749_0001_r_000000_0
2020-11-03 23:10:25,774 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1604444884749_0001_r_000000 Task Transitioned from KILL_WAIT to KILLED
2020-11-03 23:10:25,775 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1604444884749_0001Job Transitioned from FAIL_WAIT to FAIL_ABORT
2020-11-03 23:10:25,775 INFO [CommitterEvent Processor #0] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_ABORT
2020-11-03 23:10:25,782 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1604444884749_0001Job Transitioned from FAIL_ABORT to FAILED
2020-11-03 23:10:25,793 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so this is the last retry
2020-11-03 23:10:25,793 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator isAMLastRetry: true
2020-11-03 23:10:25,793 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: RMCommunicator notified that shouldUnregistered is: true
2020-11-03 23:10:25,793 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry: true
2020-11-03 23:10:25,793 INFO [Thread-70] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: JobHistoryEventHandler notified that forceJobCompletion is true
2020-11-03 23:10:25,793 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the services
2020-11-03 23:10:25,794 INFO [Thread-70] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping JobHistoryEventHandler. Size of the outstanding queue size is 0
2020-11-03 23:10:26,676 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://master:9000/tmp/hadoop-yarn/staging/bupry_dev/.staging/job_1604444884749_0001/job_1604444884749_0001_1.jhist to hdfs://master:9000/tmp/hadoop-yarn/staging/history/done_intermediate/bupry_dev/job_1604444884749_0001-1604445002998-bupry_dev-word+count-1604445025753-0-0-FAILED-default-1604445011544.jhist_tmp
2020-11-03 23:10:26,681 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:1 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:0 RackLocal:1
2020-11-03 23:10:26,686 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:6144, vCores:1>
2020-11-03 23:10:26,686 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold reached. Scheduling reduces.
2020-11-03 23:10:26,686 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps assigned. Ramping up all remaining reduces:1
2020-11-03 23:10:26,686 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:0 RackLocal:1
2020-11-03 23:10:26,744 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://master:9000/tmp/hadoop-yarn/staging/history/done_intermediate/bupry_dev/job_1604444884749_0001-1604445002998-bupry_dev-word+count-1604445025753-0-0-FAILED-default-1604445011544.jhist_tmp
2020-11-03 23:10:26,750 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://master:9000/tmp/hadoop-yarn/staging/bupry_dev/.staging/job_1604444884749_0001/job_1604444884749_0001_1_conf.xml to hdfs://master:9000/tmp/hadoop-yarn/staging/history/done_intermediate/bupry_dev/job_1604444884749_0001_conf.xml_tmp
2020-11-03 23:10:26,810 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://master:9000/tmp/hadoop-yarn/staging/history/done_intermediate/bupry_dev/job_1604444884749_0001_conf.xml_tmp
2020-11-03 23:10:26,832 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://master:9000/tmp/hadoop-yarn/staging/history/done_intermediate/bupry_dev/job_1604444884749_0001.summary_tmp to hdfs://master:9000/tmp/hadoop-yarn/staging/history/done_intermediate/bupry_dev/job_1604444884749_0001.summary
2020-11-03 23:10:26,836 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://master:9000/tmp/hadoop-yarn/staging/history/done_intermediate/bupry_dev/job_1604444884749_0001_conf.xml_tmp to hdfs://master:9000/tmp/hadoop-yarn/staging/history/done_intermediate/bupry_dev/job_1604444884749_0001_conf.xml
2020-11-03 23:10:26,839 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://master:9000/tmp/hadoop-yarn/staging/history/done_intermediate/bupry_dev/job_1604444884749_0001-1604445002998-bupry_dev-word+count-1604445025753-0-0-FAILED-default-1604445011544.jhist_tmp to hdfs://master:9000/tmp/hadoop-yarn/staging/history/done_intermediate/bupry_dev/job_1604444884749_0001-1604445002998-bupry_dev-word+count-1604445025753-0-0-FAILED-default-1604445011544.jhist
2020-11-03 23:10:26,841 INFO [Thread-70] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped JobHistoryEventHandler. super.stop()
2020-11-03 23:10:26,844 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: Setting job diagnostics to Task failed task_1604444884749_0001_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
2020-11-03 23:10:26,846 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: History url is http://bupry-01:19888/jobhistory/job/job_1604444884749_0001
2020-11-03 23:10:26,864 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: Waiting for application to be successfully unregistered.
2020-11-03 23:10:27,867 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:0 RackLocal:1
2020-11-03 23:10:27,868 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory hdfs://master:9000 /tmp/hadoop-yarn/staging/bupry_dev/.staging/job_1604444884749_0001
2020-11-03 23:10:27,873 INFO [Thread-70] org.apache.hadoop.ipc.Server: Stopping server on 43947
2020-11-03 23:10:27,875 INFO [IPC Server listener on 43947] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 43947
2020-11-03 23:10:27,876 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2020-11-03 23:10:27,875 INFO [TaskHeartbeatHandler PingChecker] org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler: TaskHeartbeatHandler thread interrupted

The remaining 4 containers have only the following line in "stderr":

Error: Could not find or load main class org.apache.hadoop.mapred.YarnChild
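For context, the "Could not find or load main class org.apache.hadoop.mapred.YarnChild" error usually means the launched containers cannot see the MapReduce jars on their classpath. A minimal sketch of the two properties this normally involves (these are the stock Hadoop 2.7 property names with their default values; the $HADOOP_* variables are assumed to resolve to the same install path on every node, otherwise absolute paths would be needed):

mapred-site.xml:
<!-- sketch only: default Hadoop 2.7 value, adjust if $HADOOP_MAPRED_HOME is not set on the NodeManagers -->
<property>
<name>mapreduce.application.classpath</name>
<value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
</property>

yarn-site.xml:
<!-- sketch only: default Hadoop 2.7 value -->
<property>
<name>yarn.application.classpath</name>
<value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value>
</property>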
11-03-2020
09:09 AM
I have a cluster of 1 master and 1 slave that are connected and "probably" communicating. I have followed several guides to install and set up the cluster; almost all of them are similar, and the only differences are the memory and cores assigned. Both my master and slave have 8 vcores and 32 GB of RAM each, with around 600 GB of disk. On the UI I can see the node is healthy and connected. However, when I try to run a Hadoop job I get the following output:

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount /input /output
20/11/03 15:51:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/11/03 15:51:35 INFO client.RMProxy: Connecting to ResourceManager at master/master:8032
20/11/03 15:51:36 INFO input.FileInputFormat: Total input paths to process : 1
20/11/03 15:51:36 INFO mapreduce.JobSubmitter: number of splits:1
20/11/03 15:51:36 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1604418534431_0001
20/11/03 15:51:36 INFO impl.YarnClientImpl: Submitted application application_1604418534431_0001
20/11/03 15:51:36 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1604418534431_0001/
20/11/03 15:51:36 INFO mapreduce.Job: Running job: job_1604418534431_0001
20/11/03 15:51:43 INFO mapreduce.Job: Job job_1604418534431_0001 running in uber mode : false
20/11/03 15:51:43 INFO mapreduce.Job: map 0% reduce 0%
20/11/03 15:51:46 INFO mapreduce.Job: Task Id : attempt_1604418534431_0001_m_000000_0, Status : FAILED
Exception from container-launch.
Container id: container_1604418534431_0001_01_000002
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 1
20/11/03 15:51:49 INFO mapreduce.Job: Task Id : attempt_1604418534431_0001_m_000000_1, Status : FAILED
Exception from container-launch.
Container id: container_1604418534431_0001_01_000003
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 1
20/11/03 15:51:52 INFO mapreduce.Job: Task Id : attempt_1604418534431_0001_m_000000_2, Status : FAILED
Exception from container-launch.
Container id: container_1604418534431_0001_01_000004
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 1
20/11/03 15:51:57 INFO mapreduce.Job: map 100% reduce 100%
20/11/03 15:51:58 INFO mapreduce.Job: Job job_1604418534431_0001 failed with state FAILED due to: Task failed task_1604418534431_0001_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
20/11/03 15:51:58 INFO mapreduce.Job: Counters: 16
Job Counters
Failed map tasks=4
Killed reduce tasks=1
Launched map tasks=4
Other local map tasks=3
Rack-local map tasks=1
Total time spent by all maps in occupied slots (ms)=3946
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=3946
Total time spent by all reduce tasks (ms)=0
Total vcore-milliseconds taken by all map tasks=3946
Total vcore-milliseconds taken by all reduce tasks=0
Total megabyte-milliseconds taken by all map tasks=4845688
Total megabyte-milliseconds taken by all reduce tasks=0
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0

What I am trying to do is the following:

echo "hello world hello Hello" > ~/Downloads/test.txt
hadoop fs -mkdir /input
hadoop fs -put ~/Downloads/test.txt /input
hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount /input /output
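To see the actual container stderr from the failed attempts, the aggregated logs can usually be pulled with the YARN CLI once the application has finished (a sketch, using the application id from the run above; yarn.log-aggregation-enable has to be set for this to return anything):

yarn logs -applicationId application_1604418534431_0001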
Labels: Apache Hadoop