INFO 2018-10-29 03:01:33,720 RecoveryManager.py:677 - ==> Auto recovery is enabled with maximum 6 in 60 minutes with gap of 5 minutes between and lifetime max being 1024. Enabled components - FLUME_HANDLER, HIVE_METASTORE, HIVE_SERVER, HIVE_CLIENT, WEBHCAT_SERVER, HISTORYSERVER, MAPREDUCE2_CLIENT, OOZIE_SERVER, OOZIE_CLIENT, PIG, RANGER_ADMIN, RANGER_TAGSYNC, RANGER_USERSYNC, SLIDER, LIVY2_SERVER, SPARK2_CLIENT, SPARK2_THRIFTSERVER, SPARK2_JOBHISTORYSERVER, SQOOP, TEZ_CLIENT, NODEMANAGER, YARN_CLIENT, APP_TIMELINE_SERVER, RESOURCEMANAGER, ZEPPELIN_MASTER INFO 2018-10-29 03:01:33,720 AmbariConfig.py:316 - Updating config property (agent.check.remote.mounts) with value (false) INFO 2018-10-29 03:01:33,721 AmbariConfig.py:316 - Updating config property (agent.auto.cache.update) with value (true) INFO 2018-10-29 03:01:33,721 AmbariConfig.py:316 - Updating config property (agent.check.mounts.timeout) with value (0) INFO 2018-10-29 03:01:33,721 Controller.py:258 - Adding 54 status commands. Heartbeat id = 0 INFO 2018-10-29 03:01:33,731 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component INFRA_SOLR of service AMBARI_INFRA of cluster Sandbox to the queue. INFO 2018-10-29 03:01:34,592 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component INFRA_SOLR_CLIENT of service AMBARI_INFRA of cluster Sandbox to the queue. INFO 2018-10-29 03:01:39,809 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component ATLAS_CLIENT of service ATLAS of cluster Sandbox to the queue. INFO 2018-10-29 03:01:41,165 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component ATLAS_SERVER of service ATLAS of cluster Sandbox to the queue. INFO 2018-10-29 03:01:42,049 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component FALCON_CLIENT of service FALCON of cluster Sandbox to the queue. INFO 2018-10-29 03:01:42,947 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component FALCON_SERVER of service FALCON of cluster Sandbox to the queue. INFO 2018-10-29 03:01:48,129 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component HBASE_REGIONSERVER of service HBASE of cluster Sandbox to the queue. INFO 2018-10-29 03:01:50,860 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component DATANODE of service HDFS of cluster Sandbox to the queue. INFO 2018-10-29 03:01:52,100 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SECONDARY_NAMENODE of service HDFS of cluster Sandbox to the queue. INFO 2018-10-29 03:01:53,520 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component HDFS_CLIENT of service HDFS of cluster Sandbox to the queue. INFO 2018-10-29 03:01:58,921 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component HIVE_CLIENT of service HIVE of cluster Sandbox to the queue. INFO 2018-10-29 03:02:01,494 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component KAFKA_BROKER of service KAFKA of cluster Sandbox to the queue. INFO 2018-10-29 03:02:02,790 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component KNOX_GATEWAY of service KNOX of cluster Sandbox to the queue. INFO 2018-10-29 03:02:04,061 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component HISTORYSERVER of service MAPREDUCE2 of cluster Sandbox to the queue. INFO 2018-10-29 03:02:07,918 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component OOZIE_CLIENT of service OOZIE of cluster Sandbox to the queue. 
INFO 2018-10-29 03:02:09,237 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component PIG of service PIG of cluster Sandbox to the queue. INFO 2018-10-29 03:02:10,653 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component RANGER_ADMIN of service RANGER of cluster Sandbox to the queue. INFO 2018-10-29 03:02:13,402 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component RANGER_USERSYNC of service RANGER of cluster Sandbox to the queue. INFO 2018-10-29 03:02:14,693 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SLIDER of service SLIDER of cluster Sandbox to the queue. INFO 2018-10-29 03:02:18,504 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component LIVY_SERVER of service SPARK of cluster Sandbox to the queue. INFO 2018-10-29 03:02:19,728 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SPARK_THRIFTSERVER of service SPARK of cluster Sandbox to the queue. INFO 2018-10-29 03:02:20,630 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component LIVY2_SERVER of service SPARK2 of cluster Sandbox to the queue. INFO 2018-10-29 03:02:22,907 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SPARK2_THRIFTSERVER of service SPARK2 of cluster Sandbox to the queue. INFO 2018-10-29 03:02:25,755 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SQOOP of service SQOOP of cluster Sandbox to the queue. INFO 2018-10-29 03:02:26,972 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SUPERVISOR of service STORM of cluster Sandbox to the queue. INFO 2018-10-29 03:02:30,950 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component STORM_UI_SERVER of service STORM of cluster Sandbox to the queue. INFO 2018-10-29 03:02:31,846 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component TEZ_CLIENT of service TEZ of cluster Sandbox to the queue. INFO 2018-10-29 03:02:32,764 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component NODEMANAGER of service YARN of cluster Sandbox to the queue. INFO 2018-10-29 03:02:34,001 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component YARN_CLIENT of service YARN of cluster Sandbox to the queue. INFO 2018-10-29 03:02:35,211 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component APP_TIMELINE_SERVER of service YARN of cluster Sandbox to the queue. INFO 2018-10-29 03:02:52,352 Controller.py:304 - Heartbeat (response id = 0) with server is running... INFO 2018-10-29 03:02:52,353 Controller.py:311 - Building heartbeat message INFO 2018-10-29 03:03:03,726 Heartbeat.py:90 - Adding host info/state to heartbeat message. 
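
The RecoveryManager.py:677 line above summarises the agent's auto-recovery policy: at most 6 restarts per 60-minute window, at least 5 minutes between attempts, and a lifetime cap of 1024 (the same maxCount / windowInMinutes / retryGap / maxLifetimeCount values show up again in the RecoverConfig echoed later in this log). To make those four numbers concrete, here is a minimal sliding-window sketch of how such a limiter can behave. This is purely illustrative and is not Ambari's actual RecoveryManager code; the class and method names are made up for the example.

```python
import time

class RecoveryWindow(object):
    """Illustrative sliding-window limiter (not Ambari's RecoveryManager):
    allow at most max_count restarts per window, keep attempts at least
    retry_gap apart, and never exceed max_lifetime_count overall."""

    def __init__(self, max_count=6, window_minutes=60,
                 retry_gap_minutes=5, max_lifetime_count=1024):
        self.max_count = max_count
        self.window = window_minutes * 60          # seconds
        self.retry_gap = retry_gap_minutes * 60    # seconds
        self.max_lifetime_count = max_lifetime_count
        self.attempts = []                         # restart timestamps in the current window
        self.lifetime_count = 0

    def may_recover(self, now=None):
        now = now if now is not None else time.time()
        # forget attempts that have fallen out of the 60-minute window
        self.attempts = [t for t in self.attempts if now - t < self.window]
        if self.lifetime_count >= self.max_lifetime_count:
            return False    # lifetime budget of 1024 exhausted
        if len(self.attempts) >= self.max_count:
            return False    # already 6 restarts in this window
        if self.attempts and now - self.attempts[-1] < self.retry_gap:
            return False    # respect the 5-minute gap between attempts
        return True

    def record_recovery(self, now=None):
        now = now if now is not None else time.time()
        self.attempts.append(now)
        self.lifetime_count += 1
```

Under this reading, the agent would only issue a restart for an enabled component (HIVE_SERVER, NODEMANAGER, etc.) when may_recover() returns True, and would then record the attempt so the window and lifetime counters advance.
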
INFO 2018-10-29 03:03:04,382 Hardware.py:174 - Some mount points were ignored: /, /dev, /sys/fs/cgroup, /hadoop, /etc/resolv.conf, /etc/hostname, /etc/hosts, /dev/shm INFO 2018-10-29 03:03:04,439 Controller.py:320 - Sending Heartbeat (id = 0) INFO 2018-10-29 03:03:04,592 Controller.py:332 - Heartbeat response received (id = 0) INFO 2018-10-29 03:03:04,593 Controller.py:341 - Heartbeat interval is 10 seconds INFO 2018-10-29 03:03:04,593 Controller.py:353 - RegistrationCommand received - repeat agent registration INFO 2018-10-29 03:03:05,289 Controller.py:170 - Registering with sandbox.hortonworks.com (172.17.0.2) (agent='{"hardwareProfile": {"kernel": "Linux", "domain": "hortonworks.com", "physicalprocessorcount": 4, "kernelrelease": "4.11.0-1.el7.elrepo.x86_64", "uptime_days": "0", "memorytotal": 10419156, "swapfree": "4.88 GB", "memorysize": 10419156, "osfamily": "redhat", "swapsize": "4.88 GB", "processorcount": 4, "netmask": "255.255.0.0", "timezone": "UTC", "hardwareisa": "x86_64", "memoryfree": 1341284, "operatingsystem": "centos", "kernelmajversion": "4.11", "kernelversion": "4.11.0", "macaddress": "02:42:AC:11:00:02", "operatingsystemrelease": "6.9", "ipaddress": "172.17.0.2", "hostname": "sandbox", "uptime_hours": "0", "fqdn": "sandbox.hortonworks.com", "id": "root", "architecture": "x86_64", "selinux": false, "mounts": [{"available": "25927624", "used": "16486024", "percent": "39%", "device": "overlay", "mountpoint": "/", "type": "overlay", "size": "44707764"}, {"available": "25927624", "used": "16486024", "percent": "39%", "device": "/dev/sda3", "mountpoint": "/hadoop", "type": "ext4", "size": "44707764"}], "hardwaremodel": "x86_64", "uptime_seconds": "701", "interfaces": "eth0,lo"}, "currentPingPort": 8670, "prefix": "/var/lib/ambari-agent/data", "agentVersion": "2.5.0.5", "agentEnv": {"transparentHugePage": "", "hostHealth": {"agentTimeStampAtReporting": 1540782185182, "activeJavaProcs": [{"command": "/usr/lib/jvm/java/bin/java -Dproc_datanode -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-datanode-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode", "pid": 516, "hadoop": true, "user": "hdfs"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_namenode -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-namenode-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT 
-XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode", "pid": 519, "hadoop": true, "user": "hdfs"}, {"command": "java -Dproc_rangeradmin -XX:MaxPermSize=256m -Xmx1024m -Xms1024m -Duser.timezone=UTC -Dservername=rangeradmin -Dlogdir=/var/log/ranger/admin -Dcatalina.base=/usr/hdp/2.6.0.3-8/ranger-admin/ews -cp /usr/hdp/2.6.0.3-8/ranger-admin/ews/webapp/WEB-INF/classes/conf:/usr/hdp/2.6.0.3-8/ranger-admin/ews/lib/*:/usr/hdp/2.6.0.3-8/ranger-admin/ews/ranger_jaas/*:/usr/hdp/2.6.0.3-8/ranger-admin/ews/webapp/WEB-INF/classes/conf/ranger_jaas:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.85.x86_64/lib/*:/*: org.apache.ranger.server.tomcat.EmbeddedServer", "pid": 863, "hadoop": false, "user": "ranger"}, {"command": "java -Dproc_rangerusersync -Dlog4j.configuration=file:/etc/ranger/usersync/conf/log4j.properties -Dlogdir=/var/log/ranger/usersync -cp /usr/hdp/2.6.0.3-8/ranger-usersync/dist/*:/usr/hdp/2.6.0.3-8/ranger-usersync/lib/*:/usr/hdp/2.6.0.3-8/ranger-usersync/conf:/* org.apache.ranger.authentication.UnixAuthenticationService -enableUnixAuth", "pid": 984, "hadoop": false, "user": "ranger"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_portmap -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop--portmap-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.portmap.Portmap", "pid": 1185, "hadoop": true, "user": "root"}, {"command": "jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp /etc/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true 
-Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs -Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter", "pid": 1208, "hadoop": true, "user": "root"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Xmx1195m -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/usr/hdp/2.6.0.3-8/hive/bin/../conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive/lib/hive-service-1.2.1000.2.6.0.3-8.jar org.apache.hadoop.hive.metastore.HiveMetaStore -hiveconf hive.log.file=hivemetastore.log -hiveconf hive.log.dir=/var/log/hive", "pid": 1457, "hadoop": true, "user": "hive"}, {"command": "/usr/lib/jvm/java/bin/java -Djava.util.logging.config.file=/usr/hdp/current/oozie-server/oozie-server/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Dhdp.version=2.6.0.3-8 -Xmx1024m -Xmx1024m -Dderby.stream.error.file=/var/log/oozie/derby.log -Doozie.home.dir=/usr/hdp/2.6.0.3-8/oozie -Doozie.config.dir=/usr/hdp/current/oozie-server/conf -Doozie.log.dir=/var/log/oozie -Doozie.data.dir=/hadoop/oozie/data -Doozie.instance.id=sandbox.hortonworks.com -Doozie.config.file=oozie-site.xml -Doozie.log4j.file=oozie-log4j.properties -Doozie.log4j.reload=10 -Doozie.http.hostname=sandbox.hortonworks.com -Doozie.admin.port=11001 -Doozie.http.port=11000 -Doozie.https.port=11443 -Doozie.base.url=http://sandbox.hortonworks.com:11000/oozie -Doozie.https.keystore.file=/home/oozie/.keystore -Doozie.https.keystore.pass=password -Djava.library.path=/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64 -Djava.endorsed.dirs=/usr/lib/bigtop-tomcat/endorsed -classpath /usr/lib/bigtop-tomcat/bin/bootstrap.jar -Dcatalina.base=/usr/hdp/current/oozie-server/oozie-server -Dcatalina.home=/usr/lib/bigtop-tomcat -Djava.io.tmpdir=/var/tmp/oozie org.apache.catalina.startup.Bootstrap start", "pid": 1555, "hadoop": true, "user": "oozie"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log 
-Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Xmx1195m -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/usr/hdp/2.6.0.3-8/hive/bin/../conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive/lib/hive-service-1.2.1000.2.6.0.3-8.jar org.apache.hive.service.server.HiveServer2 --hiveconf hive.aux.jars.path=file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar -hiveconf hive.metastore.uris= -hiveconf hive.log.file=hiveserver2.log -hiveconf hive.log.dir=/var/log/hive", "pid": 1634, "hadoop": true, "user": "hive"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dwebhcat.log.dir=/var/log/webhcat/ -Dlog4j.configuration=file:///usr/hdp/2.6.0.3-8/hive-hcatalog/sbin/../etc/webhcat/webhcat-log4j.properties -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hcat -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hcat -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive-hcatalog/sbin/../share/webhcat/svr/lib/hive-webhcat-1.2.1000.2.6.0.3-8.jar org.apache.hive.hcatalog.templeton.Main", "pid": 2020, "hadoop": true, "user": "hcat"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=2.6.0.3-8 -cp /usr/hdp/current/spark2-historyserver/conf/:/usr/hdp/current/spark2-historyserver/jars/*:/usr/hdp/current/hadoop-client/conf/ -Xmx1024m org.apache.spark.deploy.history.HistoryServer", "pid": 2192, "hadoop": true, "user": "spark"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx2g -cp /usr/hdp/current/livy2-server/jars/*:/usr/hdp/current/livy2-server/conf:/etc/spark2/conf:/etc/hadoop/conf: com.cloudera.livy.server.LivyServer", "pid": 2465, "hadoop": true, "user": "livy"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=2.6.0.3-8 -cp /usr/hdp/current/spark2-thriftserver/conf/:/usr/hdp/current/spark2-thriftserver/jars/*:/usr/hdp/current/hadoop-client/conf/ -Xmx1024m org.apache.spark.deploy.SparkSubmit --properties-file /usr/hdp/current/spark2-thriftserver/conf/spark-thrift-sparkconf.conf --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name Thrift JDBC/ODBC Server spark-internal", "pid": 2481, "hadoop": true, "user": "spark"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_historyserver -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dhdp.version= -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=mapred -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-mapreduce/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.root.logger=INFO,console -Dhadoop.id.str=mapred 
-Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=mapred -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-mapreduce/mapred -Dhadoop.log.file=mapred-mapred-historyserver-sandbox.hortonworks.com.log -Dhadoop.root.logger=INFO,RFA -Dmapred.jobsummary.logger=INFO,JSA -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer", "pid": 3408, "hadoop": true, "user": "mapred"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=None -Dspark.executor.memory=512m -Dspark.executor.instances=2 -Dspark.yarn.queue=default -Dfile.encoding=UTF-8 -Xms1024m -Xmx1024m -XX:MaxPermSize=512m -Dlog4j.configuration=file:///usr/hdp/current/zeppelin-server/conf/log4j.properties -Dzeppelin.log.file=/var/log/zeppelin/zeppelin-zeppelin-sandbox.hortonworks.com.log -cp ::/usr/hdp/current/zeppelin-server/lib/interpreter/*:/usr/hdp/current/zeppelin-server/lib/*:/usr/hdp/current/zeppelin-server/*::/usr/hdp/current/zeppelin-server/conf org.apache.zeppelin.server.ZeppelinServer", "pid": 3468, "hadoop": false, "user": "zeppelin"}, {"command": "jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp /etc/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs 
-Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter", "pid": 3583, "hadoop": true, "user": "hdfs"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_resourcemanager -Xmx250m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY -Drm.audit.logger=INFO,RMAUDIT -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath /usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/rm-config/log4j.properties org.apache.hadoop.yarn.server.resourcemanager.ResourceManager", "pid": 6891, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_timelineserver -Xmx250m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA 
-Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Djava.util.logging.config.file=ats.logging.properties -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath /usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/timelineserver-config/log4j.properties org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer", "pid": 7505, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_nodemanager -Xmx512m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -server -Dnm.audit.logger=INFO,NMAUDIT -Dnm.audit.logger=INFO,NMAUDIT -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA 
-Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath /usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager", "pid": 7753, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.log.file=zookeeper-zookeeper-server-sandbox.hortonworks.com.log -Dzookeeper.root.logger=INFO,ROLLINGFILE -cp /usr/hdp/current/zookeeper-server/bin/../build/classes:/usr/hdp/current/zookeeper-server/bin/../build/lib/*.jar:/usr/hdp/current/zookeeper-server/bin/../lib/xercesMinimal-1.9.6.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-provider-api-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-shared4-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-shared-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-lightweight-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-file-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-api-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-utils-3.0.8.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-interpolation-1.11.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-container-default-1.0-alpha-9-stable-1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/netty-3.7.0.Final.jar:/usr/hdp/current/zookeeper-server/bin/../lib/nekohtml-1.9.6.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-settings-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-repository-metadata-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-project-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-profile-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-plugin-registry-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-model-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-error-diagnostics-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-artifact-manager-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-artifact-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-ant-tasks-2.1.3.jar:/usr/hdp/current/zookeeper-server/bin/../lib/log4j-1.2.16.jar:/usr/hdp/current/zookeeper-server/bin/../lib/jsoup-1.7.1.jar:/usr/hdp/current/zooke
eper-server/bin/../lib/jline-0.9.94.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-logging-1.1.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-io-2.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-codec-1.6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/classworlds-1.1-alpha-2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/backport-util-concurrent-3.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/ant-launcher-1.8.0.jar:/usr/hdp/current/zookeeper-server/bin/../lib/ant-1.8.0.jar:/usr/hdp/current/zookeeper-server/bin/../zookeeper-3.4.6.2.6.0.3-8.jar:/usr/hdp/current/zookeeper-server/bin/../src/java/lib/*.jar:/usr/hdp/current/zookeeper-server/conf::/usr/share/zookeeper/*:/usr/share/zookeeper/* -Xmx1024m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/hdp/current/zookeeper-server/conf/zoo.cfg", "pid": 16042, "hadoop": true, "user": "zookeeper"}], "liveServices": [{"status": "Unhealthy", "name": "ntpd or chronyd", "desc": "ntpd is stopped\\n"}]}, "reverseLookup": true, "alternatives": [{"name": "hue-conf", "target": "/etc/hue/conf.empty"}], "umask": "18", "firewallName": "iptables", "stackFoldersAndFiles": [{"type": "directory", "name": "/etc/hadoop"}, {"type": "directory", "name": "/etc/hbase"}, {"type": "directory", "name": "/etc/hive"}, {"type": "directory", "name": "/etc/oozie"}, {"type": "directory", "name": "/etc/sqoop"}, {"type": "directory", "name": "/etc/hue"}, {"type": "directory", "name": "/etc/zookeeper"}, {"type": "directory", "name": "/etc/flume"}, {"type": "directory", "name": "/etc/storm"}, {"type": "directory", "name": "/etc/hive-hcatalog"}, {"type": "directory", "name": "/etc/tez"}, {"type": "directory", "name": "/etc/falcon"}, {"type": "directory", "name": "/etc/knox"}, {"type": "directory", "name": "/etc/hive-webhcat"}, {"type": "directory", "name": "/etc/kafka"}, {"type": "directory", "name": "/etc/slider"}, {"type": "directory", "name": "/etc/storm-slider-client"}, {"type": "directory", "name": "/etc/spark"}, {"type": "directory", "name": "/etc/pig"}, {"type": "directory", "name": "/etc/phoenix"}, {"type": "directory", "name": "/etc/ranger"}, {"type": "directory", "name": "/etc/ambari-metrics-collector"}, {"type": "directory", "name": "/etc/ambari-metrics-monitor"}, {"type": "directory", "name": "/etc/atlas"}, {"type": "directory", "name": "/etc/zeppelin"}, {"type": "directory", "name": "/var/run/hadoop"}, {"type": "directory", "name": "/var/run/hbase"}, {"type": "directory", "name": "/var/run/hive"}, {"type": "directory", "name": "/var/run/oozie"}, {"type": "directory", "name": "/var/run/sqoop"}, {"type": "directory", "name": "/var/run/zookeeper"}, {"type": "directory", "name": "/var/run/flume"}, {"type": "directory", "name": "/var/run/storm"}, {"type": "directory", "name": "/var/run/hive-hcatalog"}, {"type": "directory", "name": "/var/run/falcon"}, {"type": "directory", "name": "/var/run/webhcat"}, {"type": "directory", "name": "/var/run/hadoop-yarn"}, {"type": "directory", "name": "/var/run/hadoop-mapreduce"}, {"type": "directory", "name": "/var/run/knox"}, {"type": "directory", "name": "/var/run/kafka"}, {"type": "directory", "name": "/var/run/spark"}, {"type": "directory", "name": "/var/run/ranger"}, {"type": "directory", "name": "/var/run/ambari-metrics-collector"}, {"type": "directory", "name": "/var/run/ambari-metrics-monitor"}, {"type": "directory", "name": "/var/run/atlas"}, {"type": "directory", "name": "/var/run/zeppelin"}, {"type": 
"directory", "name": "/var/log/hadoop"}, {"type": "directory", "name": "/var/log/hbase"}, {"type": "directory", "name": "/var/log/hive"}, {"type": "directory", "name": "/var/log/oozie"}, {"type": "directory", "name": "/var/log/sqoop"}, {"type": "directory", "name": "/var/log/hue"}, {"type": "directory", "name": "/var/log/zookeeper"}, {"type": "directory", "name": "/var/log/flume"}, {"type": "directory", "name": "/var/log/storm"}, {"type": "directory", "name": "/var/log/hive-hcatalog"}, {"type": "directory", "name": "/var/log/falcon"}, {"type": "directory", "name": "/var/log/webhcat"}, {"type": "directory", "name": "/var/log/hadoop-yarn"}, {"type": "directory", "name": "/var/log/hadoop-mapreduce"}, {"type": "directory", "name": "/var/log/knox"}, {"type": "directory", "name": "/var/log/kafka"}, {"type": "directory", "name": "/var/log/spark"}, {"type": "directory", "name": "/var/log/ranger"}, {"type": "directory", "name": "/var/log/ambari-metrics-collector"}, {"type": "directory", "name": "/var/log/ambari-metrics-monitor"}, {"type": "directory", "name": "/var/log/atlas"}, {"type": "directory", "name": "/var/log/zeppelin"}, {"type": "directory", "name": "/usr/lib/flume"}, {"type": "directory", "name": "/usr/lib/storm"}, {"type": "directory", "name": "/usr/lib/ambari-metrics-collector"}, {"type": "directory", "name": "/var/lib/hive"}, {"type": "directory", "name": "/var/lib/oozie"}, {"type": "directory", "name": "/var/lib/hue"}, {"type": "directory", "name": "/var/lib/flume"}, {"type": "directory", "name": "/var/lib/hadoop-hdfs"}, {"type": "directory", "name": "/var/lib/hadoop-yarn"}, {"type": "directory", "name": "/var/lib/hadoop-mapreduce"}, {"type": "directory", "name": "/var/lib/knox"}, {"type": "directory", "name": "/var/lib/slider"}, {"type": "directory", "name": "/var/lib/spark"}, {"type": "directory", "name": "/var/lib/ranger"}, {"type": "directory", "name": "/var/lib/ambari-metrics-collector"}, {"type": "directory", "name": "/var/lib/zeppelin"}, {"type": "directory", "name": "/var/tmp/oozie"}, {"type": "directory", "name": "/var/tmp/sqoop"}, {"type": "directory", "name": "/tmp/hive"}, {"type": "directory", "name": "/tmp/ambari-qa"}, {"type": "directory", "name": "/tmp/hadoop-hdfs"}, {"type": "directory", "name": "/tmp/spark"}, {"type": "directory", "name": "/tmp/ranger"}, {"type": "directory", "name": "/hadoop/oozie"}, {"type": "directory", "name": "/hadoop/zookeeper"}, {"type": "directory", "name": "/hadoop/hdfs"}, {"type": "directory", "name": "/hadoop/storm"}, {"type": "directory", "name": "/hadoop/falcon"}, {"type": "directory", "name": "/hadoop/yarn"}, {"type": "directory", "name": "/kafka-logs"}], "existingUsers": [{"status": "Available", "name": "hive", "homeDir": "/home/hive"}, {"status": "Available", "name": "storm", "homeDir": "/home/storm"}, {"status": "Available", "name": "zookeeper", "homeDir": "/home/zookeeper"}, {"status": "Available", "name": "oozie", "homeDir": "/home/oozie"}, {"status": "Available", "name": "atlas", "homeDir": "/home/atlas"}, {"status": "Available", "name": "ams", "homeDir": "/home/ams"}, {"status": "Available", "name": "falcon", "homeDir": "/home/falcon"}, {"status": "Available", "name": "ranger", "homeDir": "/home/ranger"}, {"status": "Available", "name": "tez", "homeDir": "/home/tez"}, {"status": "Available", "name": "zeppelin", "homeDir": "/home/zeppelin"}, {"status": "Available", "name": "spark", "homeDir": "/home/spark"}, {"status": "Available", "name": "ambari-qa", "homeDir": "/home/ambari-qa"}, {"status": "Available", "name": "flume", 
"homeDir": "/home/flume"}, {"status": "Available", "name": "kafka", "homeDir": "/home/kafka"}, {"status": "Available", "name": "hdfs", "homeDir": "/home/hdfs"}, {"status": "Available", "name": "sqoop", "homeDir": "/home/sqoop"}, {"status": "Available", "name": "yarn", "homeDir": "/home/yarn"}, {"status": "Available", "name": "mapred", "homeDir": "/home/mapred"}, {"status": "Available", "name": "hbase", "homeDir": "/home/hbase"}, {"status": "Available", "name": "knox", "homeDir": "/home/knox"}, {"status": "Available", "name": "hcat", "homeDir": "/home/hcat"}, {"status": "Available", "name": "hue", "homeDir": "/usr/lib/hue"}, {"status": "Available", "name": "kms", "homeDir": "/var/lib/ranger/kms"}], "firewallRunning": false}, "timestamp": 1540782184604, "hostname": "sandbox.hortonworks.com", "responseId": -1, "publicHostname": "sandbox.hortonworks.com"}') INFO 2018-10-29 03:04:29,472 Controller.py:196 - Registration Successful (response id = 0) INFO 2018-10-29 03:04:29,473 ClusterConfiguration.py:119 - Updating cached configurations for cluster Sandbox INFO 2018-10-29 03:04:29,550 RecoveryManager.py:577 - RecoverConfig = {'components': 'FLUME_HANDLER,HIVE_METASTORE,HIVE_SERVER,HIVE_CLIENT,WEBHCAT_SERVER,HISTORYSERVER,MAPREDUCE2_CLIENT,OOZIE_SERVER,OOZIE_CLIENT,PIG,RANGER_ADMIN,RANGER_TAGSYNC,RANGER_USERSYNC,SLIDER,LIVY2_SERVER,SPARK2_CLIENT,SPARK2_THRIFTSERVER,SPARK2_JOBHISTORYSERVER,SQOOP,TEZ_CLIENT,NODEMANAGER,YARN_CLIENT,APP_TIMELINE_SERVER,RESOURCEMANAGER,ZEPPELIN_MASTER', 'maxCount': '6', 'maxLifetimeCount': '1024', 'recoveryTimestamp': 1540782185719, 'retryGap': '5', 'type': 'AUTO_START', 'windowInMinutes': '60'} INFO 2018-10-29 03:04:29,550 RecoveryManager.py:677 - ==> Auto recovery is enabled with maximum 6 in 60 minutes with gap of 5 minutes between and lifetime max being 1024. Enabled components - FLUME_HANDLER, HIVE_METASTORE, HIVE_SERVER, HIVE_CLIENT, WEBHCAT_SERVER, HISTORYSERVER, MAPREDUCE2_CLIENT, OOZIE_SERVER, OOZIE_CLIENT, PIG, RANGER_ADMIN, RANGER_TAGSYNC, RANGER_USERSYNC, SLIDER, LIVY2_SERVER, SPARK2_CLIENT, SPARK2_THRIFTSERVER, SPARK2_JOBHISTORYSERVER, SQOOP, TEZ_CLIENT, NODEMANAGER, YARN_CLIENT, APP_TIMELINE_SERVER, RESOURCEMANAGER, ZEPPELIN_MASTER INFO 2018-10-29 03:04:29,551 AmbariConfig.py:316 - Updating config property (agent.check.remote.mounts) with value (false) INFO 2018-10-29 03:04:29,551 AmbariConfig.py:316 - Updating config property (agent.auto.cache.update) with value (true) INFO 2018-10-29 03:04:29,551 AmbariConfig.py:316 - Updating config property (agent.check.mounts.timeout) with value (0) INFO 2018-10-29 03:04:29,552 Controller.py:258 - Adding 54 status commands. Heartbeat id = 0 INFO 2018-10-29 03:04:29,561 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component INFRA_SOLR of service AMBARI_INFRA of cluster Sandbox to the queue. INFO 2018-10-29 03:04:30,446 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component INFRA_SOLR_CLIENT of service AMBARI_INFRA of cluster Sandbox to the queue. INFO 2018-10-29 03:04:32,904 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component METRICS_COLLECTOR of service AMBARI_METRICS of cluster Sandbox to the queue. INFO 2018-10-29 03:04:35,455 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component ATLAS_CLIENT of service ATLAS of cluster Sandbox to the queue. INFO 2018-10-29 03:04:40,548 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component FALCON_CLIENT of service FALCON of cluster Sandbox to the queue. 
INFO 2018-10-29 03:04:44,040 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component FALCON_SERVER of service FALCON of cluster Sandbox to the queue. INFO 2018-10-29 03:04:44,948 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component FLUME_HANDLER of service FLUME of cluster Sandbox to the queue. INFO 2018-10-29 03:04:47,479 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component HBASE_MASTER of service HBASE of cluster Sandbox to the queue. INFO 2018-10-29 03:04:50,184 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component NFS_GATEWAY of service HDFS of cluster Sandbox to the queue. INFO 2018-10-29 03:04:51,690 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component DATANODE of service HDFS of cluster Sandbox to the queue. INFO 2018-10-29 03:04:52,568 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SECONDARY_NAMENODE of service HDFS of cluster Sandbox to the queue. INFO 2018-10-29 03:04:53,467 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component HDFS_CLIENT of service HDFS of cluster Sandbox to the queue. INFO 2018-10-29 03:04:54,761 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component NAMENODE of service HDFS of cluster Sandbox to the queue. INFO 2018-10-29 03:05:01,071 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component KAFKA_BROKER of service KAFKA of cluster Sandbox to the queue. INFO 2018-10-29 03:05:05,031 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component MAPREDUCE2_CLIENT of service MAPREDUCE2 of cluster Sandbox to the queue. INFO 2018-10-29 03:05:10,174 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component RANGER_ADMIN of service RANGER of cluster Sandbox to the queue. INFO 2018-10-29 03:05:11,366 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component RANGER_TAGSYNC of service RANGER of cluster Sandbox to the queue. INFO 2018-10-29 03:05:13,979 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SLIDER of service SLIDER of cluster Sandbox to the queue. INFO 2018-10-29 03:05:15,332 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SPARK_CLIENT of service SPARK of cluster Sandbox to the queue. INFO 2018-10-29 03:05:19,571 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SPARK_THRIFTSERVER of service SPARK of cluster Sandbox to the queue. INFO 2018-10-29 03:05:21,078 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component LIVY2_SERVER of service SPARK2 of cluster Sandbox to the queue. INFO 2018-10-29 03:05:21,961 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SPARK2_CLIENT of service SPARK2 of cluster Sandbox to the queue. INFO 2018-10-29 03:05:22,870 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SPARK2_THRIFTSERVER of service SPARK2 of cluster Sandbox to the queue. INFO 2018-10-29 03:05:24,241 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SPARK2_JOBHISTORYSERVER of service SPARK2 of cluster Sandbox to the queue. INFO 2018-10-29 03:05:27,114 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SUPERVISOR of service STORM of cluster Sandbox to the queue. INFO 2018-10-29 03:05:31,238 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component STORM_UI_SERVER of service STORM of cluster Sandbox to the queue. INFO 2018-10-29 03:05:37,131 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component APP_TIMELINE_SERVER of service YARN of cluster Sandbox to the queue. 
INFO 2018-10-29 03:05:38,439 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component RESOURCEMANAGER of service YARN of cluster Sandbox to the queue. INFO 2018-10-29 03:05:39,937 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component ZEPPELIN_MASTER of service ZEPPELIN of cluster Sandbox to the queue. INFO 2018-10-29 03:05:40,905 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component ZOOKEEPER_SERVER of service ZOOKEEPER of cluster Sandbox to the queue. INFO 2018-10-29 03:05:41,825 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component ZOOKEEPER_CLIENT of service ZOOKEEPER of cluster Sandbox to the queue. INFO 2018-10-29 03:05:43,492 AlertSchedulerHandler.py:290 - [AlertScheduler] Caching cluster Sandbox with alert hash e13d86c3bc7e07797530241453dc0c0d INFO 2018-10-29 03:05:43,651 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert mapreduce_history_server_process with UUID c04da460-8755-4138-b797-47ab11ebdfaf is disabled and will not be scheduled INFO 2018-10-29 03:05:43,652 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert mapreduce_history_server_webui with UUID a03860f2-af1b-409e-9173-9156c5081eb3 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,652 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert mapreduce_history_server_rpc_latency with UUID 5e192c4b-c0a4-461b-a255-5a9bfbfef1cd is disabled and will not be scheduled INFO 2018-10-29 03:05:43,653 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert mapreduce_history_server_cpu with UUID 0b2eece1-d694-4993-b218-655ff0a07929 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,660 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert SPARK_JOBHISTORYSERVER_PROCESS with UUID 64990312-4d14-4d67-bf79-7f6f93df05da is disabled and will not be scheduled INFO 2018-10-29 03:05:43,665 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert infra_solr with UUID 1fe14c02-bfcb-4e50-8d69-742c9897716a is disabled and will not be scheduled INFO 2018-10-29 03:05:43,665 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ranger_admin_process with UUID 240dca42-cdeb-4660-b394-f15e071afa84 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,666 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ranger_admin_password_check with UUID 603d8ad5-75c1-4a27-83cd-0ae0bcbb1036 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,666 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ranger_usersync_process with UUID ad898e82-9a6e-4f17-8279-010983b64215 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,667 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert SPARK2_JOBHISTORYSERVER_PROCESS with UUID 20c51b52-15aa-4970-9d14-ee3a1622fd2f is disabled and will not be scheduled INFO 2018-10-29 03:05:43,667 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert metadata_server_webui with UUID bf2aca96-7621-47a9-82ba-c5c7e591736f is disabled and will not be scheduled INFO 2018-10-29 03:05:43,672 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert flume_agent_status with UUID d6a8fac2-c51f-4c7e-9b4c-7ad47a5d03d7 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,672 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert zookeeper_server_process with UUID e8aeceea-22ab-4880-acf0-cd084f32d0da is disabled and will not be scheduled INFO 2018-10-29 03:05:43,672 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ams_metrics_monitor_process with UUID d4fa3373-ea4d-47db-9d86-4620568f1394 is 
disabled and will not be scheduled INFO 2018-10-29 03:05:43,673 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ams_metrics_collector_process with UUID a796e0c1-8b9a-4e11-8984-a12e8a9b78fb is disabled and will not be scheduled INFO 2018-10-29 03:05:43,673 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ams_metrics_collector_hbase_master_process with UUID 7ef7b733-1401-4afe-a360-901304368766 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,673 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ams_metrics_collector_hbase_master_cpu with UUID d4472543-c949-4a35-a2d2-f6eea346b922 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,674 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ams_metrics_collector_autostart with UUID 73ffd827-9119-452f-9d9d-d92b1785d195 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,674 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert grafana_webui with UUID 19c482da-a3c8-4914-bf23-75b9e4ea57dc is disabled and will not be scheduled INFO 2018-10-29 03:05:43,675 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert storm_webui with UUID 9f608c19-bc48-4c7e-b00b-21cdac856f6d is disabled and will not be scheduled INFO 2018-10-29 03:05:43,675 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert storm_supervisor_process with UUID 94856057-a19b-4d97-a548-51a136a44223 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,676 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert storm_nimbus_process with UUID 08207d61-5aec-48bd-869a-8bc4d4fcd3a5 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,676 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert storm_drpc_server with UUID 83a26896-e425-4a67-9b1b-fc8b9bdd032a is disabled and will not be scheduled INFO 2018-10-29 03:05:43,677 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert oozie_server_webui with UUID 79e1191f-4cc5-49a4-858b-0a3404208b94 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,677 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert oozie_server_status with UUID b6836f07-f5f3-4e19-90a2-112b84cc05dd is disabled and will not be scheduled INFO 2018-10-29 03:05:43,687 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_cpu with UUID fa67564f-f463-402f-a1d2-2fa395b03f99 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,687 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert secondary_namenode_process with UUID 6ed52c5d-d900-4327-9bc2-138ee00a5735 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,688 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_hdfs_pending_deletion_blocks with UUID 6bde7b44-4f70-471f-8d2e-7c83dd80de2b is disabled and will not be scheduled INFO 2018-10-29 03:05:43,688 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_client_rpc_queue_latency_daily with UUID 11cbc1bd-05ae-4198-ad37-1488f978528a is disabled and will not be scheduled INFO 2018-10-29 03:05:43,689 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_ha_health with UUID 3abf9692-1adc-4a8c-b555-2b07af91cb51 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,689 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert datanode_heap_usage with UUID 1e8eea58-1d72-482c-a238-ef6db55c7b44 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,689 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert datanode_health_summary with UUID ad46e3d1-bb0a-4640-8c92-947ffb4343ed is 
disabled and will not be scheduled INFO 2018-10-29 03:05:43,690 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert datanode_unmounted_data_dir with UUID 5eff1c2f-5c21-409d-a748-95ebc12b65ed is disabled and will not be scheduled INFO 2018-10-29 03:05:43,690 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_service_rpc_queue_latency_daily with UUID 52157e50-ced5-4515-8d3d-4fc6527031fc is disabled and will not be scheduled INFO 2018-10-29 03:05:43,690 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert datanode_process with UUID a0e66a6d-dcec-4f62-9085-abd672b487b1 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,691 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_client_rpc_processing_latency_daily with UUID b2c4fadb-b5bd-42fa-8903-0d2572d445d4 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,692 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_hdfs_blocks_health with UUID d9b77fc7-0f2f-4471-a0f9-85f429a32296 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,692 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_webui with UUID e4895dfa-f6ce-48bd-bc9e-884e63a8c996 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,692 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert datanode_webui with UUID 3760c848-0090-4aa8-ba71-f4154ec8d29a is disabled and will not be scheduled INFO 2018-10-29 03:05:43,693 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert datanode_storage with UUID 9e1a3236-6123-47ec-b9c0-fe4f9492d676 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,693 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_service_rpc_processing_latency_hourly with UUID 1de98065-b774-4678-9f58-4b4a7b1052d7 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,694 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert nfsgateway_process with UUID 4840a7ba-2563-4d51-8779-c6c773e715d4 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,695 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_increase_in_storage_capacity_usage_daily with UUID ed7e8afb-5537-48b4-82c0-3ca687f0af35 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,695 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_client_rpc_queue_latency_hourly with UUID 1934f3b1-60d2-42cb-a26f-986625718162 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,696 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_service_rpc_processing_latency_daily with UUID 10849f29-8045-4ad2-96fd-03696df045e0 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,696 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert upgrade_finalized_state with UUID e7ccee0c-d9da-4a81-bf85-5465aaf01a8a is disabled and will not be scheduled INFO 2018-10-29 03:05:43,697 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_client_rpc_processing_latency_hourly with UUID 323cf92d-7bab-4a26-a22b-2ea53015aa0a is disabled and will not be scheduled INFO 2018-10-29 03:05:43,698 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_increase_in_storage_capacity_usage_weekly with UUID 6e553a1c-f113-4c78-8e20-609b6c83b7f5 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,699 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_directory_status with UUID 6c67bac7-5974-443e-93c8-d46c9c01ea0d is disabled and will not be scheduled INFO 2018-10-29 03:05:43,712 
AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_service_rpc_queue_latency_hourly with UUID 008b1529-a959-45de-b38b-6bb9ec6a1dcb is disabled and will not be scheduled INFO 2018-10-29 03:05:43,713 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert increase_nn_heap_usage_weekly with UUID a3afa569-6f22-4e52-82cf-cea506414e8d is disabled and will not be scheduled INFO 2018-10-29 03:05:43,713 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_hdfs_capacity_utilization with UUID 732e7066-2433-49b2-b15d-a1573a304adc is disabled and will not be scheduled INFO 2018-10-29 03:05:43,713 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_rpc_latency with UUID 94957f0d-9cf0-433b-9b11-19674aa59bd7 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,714 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_last_checkpoint with UUID e9b8a11a-e2f2-4e17-9d0f-28dee96d4b33 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,714 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert increase_nn_heap_usage_daily with UUID 8249d880-7176-4d61-a26e-f75ee507511c is disabled and will not be scheduled INFO 2018-10-29 03:05:43,715 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert falcon_server_webui with UUID 61b937eb-6483-48f3-b8c0-71efcba52e85 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,715 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert falcon_server_process with UUID 05817f4d-0e8e-4d2c-984c-9f9a13257eb6 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,727 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert yarn_nodemanager_health with UUID 27535dc0-51c4-4716-83d0-4ccb5fe86c70 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,728 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert yarn_resourcemanager_webui with UUID 511da394-29e1-4aa5-a5b6-be5b93354f10 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,728 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert yarn_resourcemanager_cpu with UUID bba36132-669d-4198-ba2a-ff67e31d695e is disabled and will not be scheduled INFO 2018-10-29 03:05:43,728 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert yarn_nodemanager_webui with UUID d1705ebc-df53-4dcb-8961-ce95b6de1ed5 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,729 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert yarn_resourcemanager_rpc_latency with UUID 22cb0500-1321-478b-84fa-5300723aac5c is disabled and will not be scheduled INFO 2018-10-29 03:05:43,729 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert nodemanager_health_summary with UUID 2f59de42-83f8-41ce-a9cc-b989ed54f7a6 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,730 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert yarn_app_timeline_server_webui with UUID 63aa90d0-c786-4bb2-a14b-f83ccb66bb96 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,731 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert zeppelin_server_status with UUID 366c4af1-d19a-48c2-a456-685fdf720e10 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,731 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert hbase_master_process with UUID 8cb278a0-7f9f-4bb9-a4f5-c881ee3a3efd is disabled and will not be scheduled INFO 2018-10-29 03:05:43,732 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert hbase_master_cpu with UUID 40a8dfee-fed8-4ade-914a-adbeefa76eba is disabled and will not be scheduled INFO 
2018-10-29 03:05:43,732 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert hbase_regionserver_process with UUID a4ff32b7-0a90-4e69-95c0-a5e2ce07e751 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,733 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert knox_gateway_process with UUID 6b93cb6c-b38a-440e-8b98-906fad0365ad is disabled and will not be scheduled INFO 2018-10-29 03:05:43,733 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert hive_metastore_process with UUID efd35947-2ed7-4407-8152-ffc2e84ba94a is disabled and will not be scheduled INFO 2018-10-29 03:05:43,734 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert hive_server_process with UUID e76e8d09-a6d7-45aa-97ed-ea516947d9f2 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,734 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert hive_webhcat_server_status with UUID 801c56b9-d65c-43da-a214-0a7835bdea65 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,738 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert kafka_broker_process with UUID 1704301f-7c12-4d93-a4fa-b6810dd2b324 is disabled and will not be scheduled INFO 2018-10-29 03:05:43,738 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ambari_agent_disk_usage with UUID 9c128bc6-d731-42cb-b139-624db29c4e7b is disabled and will not be scheduled INFO 2018-10-29 03:05:43,739 AlertSchedulerHandler.py:230 - [AlertScheduler] Reschedule Summary: 74 rescheduled, 0 unscheduled INFO 2018-10-29 03:05:43,742 Controller.py:512 - Registration response from sandbox.hortonworks.com was OK INFO 2018-10-29 03:05:43,743 Controller.py:517 - Resetting ActionQueue... INFO 2018-10-29 03:05:53,746 Controller.py:304 - Heartbeat (response id = 0) with server is running... INFO 2018-10-29 03:05:53,747 Controller.py:311 - Building heartbeat message INFO 2018-10-29 03:06:05,307 Heartbeat.py:90 - Adding host info/state to heartbeat message. 
INFO 2018-10-29 03:06:05,970 Hardware.py:174 - Some mount points were ignored: /, /dev, /sys/fs/cgroup, /hadoop, /etc/resolv.conf, /etc/hostname, /etc/hosts, /dev/shm INFO 2018-10-29 03:06:06,040 Controller.py:320 - Sending Heartbeat (id = 0) INFO 2018-10-29 03:06:06,208 Controller.py:332 - Heartbeat response received (id = 0) INFO 2018-10-29 03:06:06,208 Controller.py:341 - Heartbeat interval is 10 seconds INFO 2018-10-29 03:06:06,209 Controller.py:353 - RegistrationCommand received - repeat agent registration INFO 2018-10-29 03:06:06,919 Controller.py:170 - Registering with sandbox.hortonworks.com (172.17.0.2) (agent='{"hardwareProfile": {"kernel": "Linux", "domain": "hortonworks.com", "physicalprocessorcount": 4, "kernelrelease": "4.11.0-1.el7.elrepo.x86_64", "uptime_days": "0", "memorytotal": 10419156, "swapfree": "4.88 GB", "memorysize": 10419156, "osfamily": "redhat", "swapsize": "4.88 GB", "processorcount": 4, "netmask": "255.255.0.0", "timezone": "UTC", "hardwareisa": "x86_64", "memoryfree": 1341284, "operatingsystem": "centos", "kernelmajversion": "4.11", "kernelversion": "4.11.0", "macaddress": "02:42:AC:11:00:02", "operatingsystemrelease": "6.9", "ipaddress": "172.17.0.2", "hostname": "sandbox", "uptime_hours": "0", "fqdn": "sandbox.hortonworks.com", "id": "root", "architecture": "x86_64", "selinux": false, "mounts": [{"available": "25927624", "used": "16486024", "percent": "39%", "device": "overlay", "mountpoint": "/", "type": "overlay", "size": "44707764"}, {"available": "25927624", "used": "16486024", "percent": "39%", "device": "/dev/sda3", "mountpoint": "/hadoop", "type": "ext4", "size": "44707764"}], "hardwaremodel": "x86_64", "uptime_seconds": "701", "interfaces": "eth0,lo"}, "currentPingPort": 8670, "prefix": "/var/lib/ambari-agent/data", "agentVersion": "2.5.0.5", "agentEnv": {"transparentHugePage": "", "hostHealth": {"agentTimeStampAtReporting": 1540782366813, "activeJavaProcs": [{"command": "/usr/lib/jvm/java/bin/java -Dproc_datanode -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-datanode-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode", "pid": 516, "hadoop": true, "user": "hdfs"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_namenode -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-namenode-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT 
-XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode", "pid": 519, "hadoop": true, "user": "hdfs"}, {"command": "java -Dproc_rangeradmin -XX:MaxPermSize=256m -Xmx1024m -Xms1024m -Duser.timezone=UTC -Dservername=rangeradmin -Dlogdir=/var/log/ranger/admin -Dcatalina.base=/usr/hdp/2.6.0.3-8/ranger-admin/ews -cp /usr/hdp/2.6.0.3-8/ranger-admin/ews/webapp/WEB-INF/classes/conf:/usr/hdp/2.6.0.3-8/ranger-admin/ews/lib/*:/usr/hdp/2.6.0.3-8/ranger-admin/ews/ranger_jaas/*:/usr/hdp/2.6.0.3-8/ranger-admin/ews/webapp/WEB-INF/classes/conf/ranger_jaas:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.85.x86_64/lib/*:/*: org.apache.ranger.server.tomcat.EmbeddedServer", "pid": 863, "hadoop": false, "user": "ranger"}, {"command": "java -Dproc_rangerusersync -Dlog4j.configuration=file:/etc/ranger/usersync/conf/log4j.properties -Dlogdir=/var/log/ranger/usersync -cp /usr/hdp/2.6.0.3-8/ranger-usersync/dist/*:/usr/hdp/2.6.0.3-8/ranger-usersync/lib/*:/usr/hdp/2.6.0.3-8/ranger-usersync/conf:/* org.apache.ranger.authentication.UnixAuthenticationService -enableUnixAuth", "pid": 984, "hadoop": false, "user": "ranger"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_portmap -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop--portmap-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.portmap.Portmap", "pid": 1185, "hadoop": true, "user": "root"}, {"command": "jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp /etc/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true 
-Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs -Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter", "pid": 1208, "hadoop": true, "user": "root"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Xmx1195m -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/usr/hdp/2.6.0.3-8/hive/bin/../conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive/lib/hive-service-1.2.1000.2.6.0.3-8.jar org.apache.hadoop.hive.metastore.HiveMetaStore -hiveconf hive.log.file=hivemetastore.log -hiveconf hive.log.dir=/var/log/hive", "pid": 1457, "hadoop": true, "user": "hive"}, {"command": "/usr/lib/jvm/java/bin/java -Djava.util.logging.config.file=/usr/hdp/current/oozie-server/oozie-server/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Dhdp.version=2.6.0.3-8 -Xmx1024m -Xmx1024m -Dderby.stream.error.file=/var/log/oozie/derby.log -Doozie.home.dir=/usr/hdp/2.6.0.3-8/oozie -Doozie.config.dir=/usr/hdp/current/oozie-server/conf -Doozie.log.dir=/var/log/oozie -Doozie.data.dir=/hadoop/oozie/data -Doozie.instance.id=sandbox.hortonworks.com -Doozie.config.file=oozie-site.xml -Doozie.log4j.file=oozie-log4j.properties -Doozie.log4j.reload=10 -Doozie.http.hostname=sandbox.hortonworks.com -Doozie.admin.port=11001 -Doozie.http.port=11000 -Doozie.https.port=11443 -Doozie.base.url=http://sandbox.hortonworks.com:11000/oozie -Doozie.https.keystore.file=/home/oozie/.keystore -Doozie.https.keystore.pass=password -Djava.library.path=/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64 -Djava.endorsed.dirs=/usr/lib/bigtop-tomcat/endorsed -classpath /usr/lib/bigtop-tomcat/bin/bootstrap.jar -Dcatalina.base=/usr/hdp/current/oozie-server/oozie-server -Dcatalina.home=/usr/lib/bigtop-tomcat -Djava.io.tmpdir=/var/tmp/oozie org.apache.catalina.startup.Bootstrap start", "pid": 1555, "hadoop": true, "user": "oozie"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log 
-Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Xmx1195m -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/usr/hdp/2.6.0.3-8/hive/bin/../conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive/lib/hive-service-1.2.1000.2.6.0.3-8.jar org.apache.hive.service.server.HiveServer2 --hiveconf hive.aux.jars.path=file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar -hiveconf hive.metastore.uris= -hiveconf hive.log.file=hiveserver2.log -hiveconf hive.log.dir=/var/log/hive", "pid": 1634, "hadoop": true, "user": "hive"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dwebhcat.log.dir=/var/log/webhcat/ -Dlog4j.configuration=file:///usr/hdp/2.6.0.3-8/hive-hcatalog/sbin/../etc/webhcat/webhcat-log4j.properties -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hcat -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hcat -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive-hcatalog/sbin/../share/webhcat/svr/lib/hive-webhcat-1.2.1000.2.6.0.3-8.jar org.apache.hive.hcatalog.templeton.Main", "pid": 2020, "hadoop": true, "user": "hcat"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=2.6.0.3-8 -cp /usr/hdp/current/spark2-historyserver/conf/:/usr/hdp/current/spark2-historyserver/jars/*:/usr/hdp/current/hadoop-client/conf/ -Xmx1024m org.apache.spark.deploy.history.HistoryServer", "pid": 2192, "hadoop": true, "user": "spark"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx2g -cp /usr/hdp/current/livy2-server/jars/*:/usr/hdp/current/livy2-server/conf:/etc/spark2/conf:/etc/hadoop/conf: com.cloudera.livy.server.LivyServer", "pid": 2465, "hadoop": true, "user": "livy"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=2.6.0.3-8 -cp /usr/hdp/current/spark2-thriftserver/conf/:/usr/hdp/current/spark2-thriftserver/jars/*:/usr/hdp/current/hadoop-client/conf/ -Xmx1024m org.apache.spark.deploy.SparkSubmit --properties-file /usr/hdp/current/spark2-thriftserver/conf/spark-thrift-sparkconf.conf --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name Thrift JDBC/ODBC Server spark-internal", "pid": 2481, "hadoop": true, "user": "spark"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_historyserver -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dhdp.version= -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=mapred -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-mapreduce/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.root.logger=INFO,console -Dhadoop.id.str=mapred 
-Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=mapred -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-mapreduce/mapred -Dhadoop.log.file=mapred-mapred-historyserver-sandbox.hortonworks.com.log -Dhadoop.root.logger=INFO,RFA -Dmapred.jobsummary.logger=INFO,JSA -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer", "pid": 3408, "hadoop": true, "user": "mapred"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=None -Dspark.executor.memory=512m -Dspark.executor.instances=2 -Dspark.yarn.queue=default -Dfile.encoding=UTF-8 -Xms1024m -Xmx1024m -XX:MaxPermSize=512m -Dlog4j.configuration=file:///usr/hdp/current/zeppelin-server/conf/log4j.properties -Dzeppelin.log.file=/var/log/zeppelin/zeppelin-zeppelin-sandbox.hortonworks.com.log -cp ::/usr/hdp/current/zeppelin-server/lib/interpreter/*:/usr/hdp/current/zeppelin-server/lib/*:/usr/hdp/current/zeppelin-server/*::/usr/hdp/current/zeppelin-server/conf org.apache.zeppelin.server.ZeppelinServer", "pid": 3468, "hadoop": false, "user": "zeppelin"}, {"command": "jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp /etc/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs 
-Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter", "pid": 3583, "hadoop": true, "user": "hdfs"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_resourcemanager -Xmx250m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY -Drm.audit.logger=INFO,RMAUDIT -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath /usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/rm-config/log4j.properties org.apache.hadoop.yarn.server.resourcemanager.ResourceManager", "pid": 6891, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_timelineserver -Xmx250m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA 
-Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Djava.util.logging.config.file=ats.logging.properties -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath /usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/timelineserver-config/log4j.properties org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer", "pid": 7505, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_nodemanager -Xmx512m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -server -Dnm.audit.logger=INFO,NMAUDIT -Dnm.audit.logger=INFO,NMAUDIT -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA 
-Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath /usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager", "pid": 7753, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.log.file=zookeeper-zookeeper-server-sandbox.hortonworks.com.log -Dzookeeper.root.logger=INFO,ROLLINGFILE -cp /usr/hdp/current/zookeeper-server/bin/../build/classes:/usr/hdp/current/zookeeper-server/bin/../build/lib/*.jar:/usr/hdp/current/zookeeper-server/bin/../lib/xercesMinimal-1.9.6.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-provider-api-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-shared4-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-shared-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-lightweight-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-file-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-api-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-utils-3.0.8.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-interpolation-1.11.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-container-default-1.0-alpha-9-stable-1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/netty-3.7.0.Final.jar:/usr/hdp/current/zookeeper-server/bin/../lib/nekohtml-1.9.6.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-settings-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-repository-metadata-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-project-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-profile-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-plugin-registry-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-model-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-error-diagnostics-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-artifact-manager-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-artifact-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-ant-tasks-2.1.3.jar:/usr/hdp/current/zookeeper-server/bin/../lib/log4j-1.2.16.jar:/usr/hdp/current/zookeeper-server/bin/../lib/jsoup-1.7.1.jar:/usr/hdp/current/zooke
eper-server/bin/../lib/jline-0.9.94.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-logging-1.1.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-io-2.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-codec-1.6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/classworlds-1.1-alpha-2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/backport-util-concurrent-3.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/ant-launcher-1.8.0.jar:/usr/hdp/current/zookeeper-server/bin/../lib/ant-1.8.0.jar:/usr/hdp/current/zookeeper-server/bin/../zookeeper-3.4.6.2.6.0.3-8.jar:/usr/hdp/current/zookeeper-server/bin/../src/java/lib/*.jar:/usr/hdp/current/zookeeper-server/conf::/usr/share/zookeeper/*:/usr/share/zookeeper/* -Xmx1024m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/hdp/current/zookeeper-server/conf/zoo.cfg", "pid": 16042, "hadoop": true, "user": "zookeeper"}], "liveServices": [{"status": "Unhealthy", "name": "ntpd or chronyd", "desc": "ntpd is stopped\\n"}]}, "reverseLookup": true, "alternatives": [{"name": "hue-conf", "target": "/etc/hue/conf.empty"}], "umask": "18", "firewallName": "iptables", "stackFoldersAndFiles": [{"type": "directory", "name": "/etc/hadoop"}, {"type": "directory", "name": "/etc/hbase"}, {"type": "directory", "name": "/etc/hive"}, {"type": "directory", "name": "/etc/oozie"}, {"type": "directory", "name": "/etc/sqoop"}, {"type": "directory", "name": "/etc/hue"}, {"type": "directory", "name": "/etc/zookeeper"}, {"type": "directory", "name": "/etc/flume"}, {"type": "directory", "name": "/etc/storm"}, {"type": "directory", "name": "/etc/hive-hcatalog"}, {"type": "directory", "name": "/etc/tez"}, {"type": "directory", "name": "/etc/falcon"}, {"type": "directory", "name": "/etc/knox"}, {"type": "directory", "name": "/etc/hive-webhcat"}, {"type": "directory", "name": "/etc/kafka"}, {"type": "directory", "name": "/etc/slider"}, {"type": "directory", "name": "/etc/storm-slider-client"}, {"type": "directory", "name": "/etc/spark"}, {"type": "directory", "name": "/etc/pig"}, {"type": "directory", "name": "/etc/phoenix"}, {"type": "directory", "name": "/etc/ranger"}, {"type": "directory", "name": "/etc/ambari-metrics-collector"}, {"type": "directory", "name": "/etc/ambari-metrics-monitor"}, {"type": "directory", "name": "/etc/atlas"}, {"type": "directory", "name": "/etc/zeppelin"}, {"type": "directory", "name": "/var/run/hadoop"}, {"type": "directory", "name": "/var/run/hbase"}, {"type": "directory", "name": "/var/run/hive"}, {"type": "directory", "name": "/var/run/oozie"}, {"type": "directory", "name": "/var/run/sqoop"}, {"type": "directory", "name": "/var/run/zookeeper"}, {"type": "directory", "name": "/var/run/flume"}, {"type": "directory", "name": "/var/run/storm"}, {"type": "directory", "name": "/var/run/hive-hcatalog"}, {"type": "directory", "name": "/var/run/falcon"}, {"type": "directory", "name": "/var/run/webhcat"}, {"type": "directory", "name": "/var/run/hadoop-yarn"}, {"type": "directory", "name": "/var/run/hadoop-mapreduce"}, {"type": "directory", "name": "/var/run/knox"}, {"type": "directory", "name": "/var/run/kafka"}, {"type": "directory", "name": "/var/run/spark"}, {"type": "directory", "name": "/var/run/ranger"}, {"type": "directory", "name": "/var/run/ambari-metrics-collector"}, {"type": "directory", "name": "/var/run/ambari-metrics-monitor"}, {"type": "directory", "name": "/var/run/atlas"}, {"type": "directory", "name": "/var/run/zeppelin"}, {"type": 
"directory", "name": "/var/log/hadoop"}, {"type": "directory", "name": "/var/log/hbase"}, {"type": "directory", "name": "/var/log/hive"}, {"type": "directory", "name": "/var/log/oozie"}, {"type": "directory", "name": "/var/log/sqoop"}, {"type": "directory", "name": "/var/log/hue"}, {"type": "directory", "name": "/var/log/zookeeper"}, {"type": "directory", "name": "/var/log/flume"}, {"type": "directory", "name": "/var/log/storm"}, {"type": "directory", "name": "/var/log/hive-hcatalog"}, {"type": "directory", "name": "/var/log/falcon"}, {"type": "directory", "name": "/var/log/webhcat"}, {"type": "directory", "name": "/var/log/hadoop-yarn"}, {"type": "directory", "name": "/var/log/hadoop-mapreduce"}, {"type": "directory", "name": "/var/log/knox"}, {"type": "directory", "name": "/var/log/kafka"}, {"type": "directory", "name": "/var/log/spark"}, {"type": "directory", "name": "/var/log/ranger"}, {"type": "directory", "name": "/var/log/ambari-metrics-collector"}, {"type": "directory", "name": "/var/log/ambari-metrics-monitor"}, {"type": "directory", "name": "/var/log/atlas"}, {"type": "directory", "name": "/var/log/zeppelin"}, {"type": "directory", "name": "/usr/lib/flume"}, {"type": "directory", "name": "/usr/lib/storm"}, {"type": "directory", "name": "/usr/lib/ambari-metrics-collector"}, {"type": "directory", "name": "/var/lib/hive"}, {"type": "directory", "name": "/var/lib/oozie"}, {"type": "directory", "name": "/var/lib/hue"}, {"type": "directory", "name": "/var/lib/flume"}, {"type": "directory", "name": "/var/lib/hadoop-hdfs"}, {"type": "directory", "name": "/var/lib/hadoop-yarn"}, {"type": "directory", "name": "/var/lib/hadoop-mapreduce"}, {"type": "directory", "name": "/var/lib/knox"}, {"type": "directory", "name": "/var/lib/slider"}, {"type": "directory", "name": "/var/lib/spark"}, {"type": "directory", "name": "/var/lib/ranger"}, {"type": "directory", "name": "/var/lib/ambari-metrics-collector"}, {"type": "directory", "name": "/var/lib/zeppelin"}, {"type": "directory", "name": "/var/tmp/oozie"}, {"type": "directory", "name": "/var/tmp/sqoop"}, {"type": "directory", "name": "/tmp/hive"}, {"type": "directory", "name": "/tmp/ambari-qa"}, {"type": "directory", "name": "/tmp/hadoop-hdfs"}, {"type": "directory", "name": "/tmp/spark"}, {"type": "directory", "name": "/tmp/ranger"}, {"type": "directory", "name": "/hadoop/oozie"}, {"type": "directory", "name": "/hadoop/zookeeper"}, {"type": "directory", "name": "/hadoop/hdfs"}, {"type": "directory", "name": "/hadoop/storm"}, {"type": "directory", "name": "/hadoop/falcon"}, {"type": "directory", "name": "/hadoop/yarn"}, {"type": "directory", "name": "/kafka-logs"}], "existingUsers": [{"status": "Available", "name": "hive", "homeDir": "/home/hive"}, {"status": "Available", "name": "storm", "homeDir": "/home/storm"}, {"status": "Available", "name": "zookeeper", "homeDir": "/home/zookeeper"}, {"status": "Available", "name": "oozie", "homeDir": "/home/oozie"}, {"status": "Available", "name": "atlas", "homeDir": "/home/atlas"}, {"status": "Available", "name": "ams", "homeDir": "/home/ams"}, {"status": "Available", "name": "falcon", "homeDir": "/home/falcon"}, {"status": "Available", "name": "ranger", "homeDir": "/home/ranger"}, {"status": "Available", "name": "tez", "homeDir": "/home/tez"}, {"status": "Available", "name": "zeppelin", "homeDir": "/home/zeppelin"}, {"status": "Available", "name": "spark", "homeDir": "/home/spark"}, {"status": "Available", "name": "ambari-qa", "homeDir": "/home/ambari-qa"}, {"status": "Available", "name": "flume", 
"homeDir": "/home/flume"}, {"status": "Available", "name": "kafka", "homeDir": "/home/kafka"}, {"status": "Available", "name": "hdfs", "homeDir": "/home/hdfs"}, {"status": "Available", "name": "sqoop", "homeDir": "/home/sqoop"}, {"status": "Available", "name": "yarn", "homeDir": "/home/yarn"}, {"status": "Available", "name": "mapred", "homeDir": "/home/mapred"}, {"status": "Available", "name": "hbase", "homeDir": "/home/hbase"}, {"status": "Available", "name": "knox", "homeDir": "/home/knox"}, {"status": "Available", "name": "hcat", "homeDir": "/home/hcat"}, {"status": "Available", "name": "hue", "homeDir": "/usr/lib/hue"}, {"status": "Available", "name": "kms", "homeDir": "/var/lib/ranger/kms"}], "firewallRunning": false}, "timestamp": 1540782366219, "hostname": "sandbox.hortonworks.com", "responseId": -1, "publicHostname": "sandbox.hortonworks.com"}') INFO 2018-10-29 03:07:35,837 Controller.py:196 - Registration Successful (response id = 0) INFO 2018-10-29 03:07:35,838 ClusterConfiguration.py:119 - Updating cached configurations for cluster Sandbox INFO 2018-10-29 03:07:35,908 RecoveryManager.py:577 - RecoverConfig = {'components': 'FLUME_HANDLER,HIVE_METASTORE,HIVE_SERVER,HIVE_CLIENT,WEBHCAT_SERVER,HISTORYSERVER,MAPREDUCE2_CLIENT,OOZIE_SERVER,OOZIE_CLIENT,PIG,RANGER_ADMIN,RANGER_TAGSYNC,RANGER_USERSYNC,SLIDER,LIVY2_SERVER,SPARK2_CLIENT,SPARK2_THRIFTSERVER,SPARK2_JOBHISTORYSERVER,SQOOP,TEZ_CLIENT,NODEMANAGER,YARN_CLIENT,APP_TIMELINE_SERVER,RESOURCEMANAGER,ZEPPELIN_MASTER', 'maxCount': '6', 'maxLifetimeCount': '1024', 'recoveryTimestamp': 1540782367382, 'retryGap': '5', 'type': 'AUTO_START', 'windowInMinutes': '60'} INFO 2018-10-29 03:07:35,909 RecoveryManager.py:677 - ==> Auto recovery is enabled with maximum 6 in 60 minutes with gap of 5 minutes between and lifetime max being 1024. Enabled components - FLUME_HANDLER, HIVE_METASTORE, HIVE_SERVER, HIVE_CLIENT, WEBHCAT_SERVER, HISTORYSERVER, MAPREDUCE2_CLIENT, OOZIE_SERVER, OOZIE_CLIENT, PIG, RANGER_ADMIN, RANGER_TAGSYNC, RANGER_USERSYNC, SLIDER, LIVY2_SERVER, SPARK2_CLIENT, SPARK2_THRIFTSERVER, SPARK2_JOBHISTORYSERVER, SQOOP, TEZ_CLIENT, NODEMANAGER, YARN_CLIENT, APP_TIMELINE_SERVER, RESOURCEMANAGER, ZEPPELIN_MASTER INFO 2018-10-29 03:07:35,910 AmbariConfig.py:316 - Updating config property (agent.check.remote.mounts) with value (false) INFO 2018-10-29 03:07:35,910 AmbariConfig.py:316 - Updating config property (agent.auto.cache.update) with value (true) INFO 2018-10-29 03:07:35,911 AmbariConfig.py:316 - Updating config property (agent.check.mounts.timeout) with value (0) INFO 2018-10-29 03:07:35,911 Controller.py:258 - Adding 54 status commands. Heartbeat id = 0 INFO 2018-10-29 03:07:35,922 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component INFRA_SOLR of service AMBARI_INFRA of cluster Sandbox to the queue. INFO 2018-10-29 03:07:36,917 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component INFRA_SOLR_CLIENT of service AMBARI_INFRA of cluster Sandbox to the queue. INFO 2018-10-29 03:07:38,462 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component METRICS_MONITOR of service AMBARI_METRICS of cluster Sandbox to the queue. INFO 2018-10-29 03:07:39,422 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component METRICS_COLLECTOR of service AMBARI_METRICS of cluster Sandbox to the queue. INFO 2018-10-29 03:07:40,408 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component METRICS_GRAFANA of service AMBARI_METRICS of cluster Sandbox to the queue. 
INFO 2018-10-29 03:07:43,440 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component ATLAS_SERVER of service ATLAS of cluster Sandbox to the queue. INFO 2018-10-29 03:07:46,406 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component FALCON_SERVER of service FALCON of cluster Sandbox to the queue. INFO 2018-10-29 03:07:51,999 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component HBASE_REGIONSERVER of service HBASE of cluster Sandbox to the queue. INFO 2018-10-29 03:07:53,426 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component NFS_GATEWAY of service HDFS of cluster Sandbox to the queue. INFO 2018-10-29 03:07:54,961 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component DATANODE of service HDFS of cluster Sandbox to the queue. INFO 2018-10-29 03:07:55,930 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SECONDARY_NAMENODE of service HDFS of cluster Sandbox to the queue. INFO 2018-10-29 03:08:08,400 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component HISTORYSERVER of service MAPREDUCE2 of cluster Sandbox to the queue. INFO 2018-10-29 03:08:09,748 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component MAPREDUCE2_CLIENT of service MAPREDUCE2 of cluster Sandbox to the queue. INFO 2018-10-29 03:08:17,749 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component RANGER_TAGSYNC of service RANGER of cluster Sandbox to the queue. INFO 2018-10-29 03:08:19,554 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component RANGER_USERSYNC of service RANGER of cluster Sandbox to the queue. INFO 2018-10-29 03:08:26,228 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component LIVY_SERVER of service SPARK of cluster Sandbox to the queue. INFO 2018-10-29 03:08:27,305 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SPARK_THRIFTSERVER of service SPARK of cluster Sandbox to the queue. INFO 2018-10-29 03:08:31,663 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SPARK2_THRIFTSERVER of service SPARK2 of cluster Sandbox to the queue. INFO 2018-10-29 03:08:34,904 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SQOOP of service SQOOP of cluster Sandbox to the queue. INFO 2018-10-29 03:08:35,868 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SUPERVISOR of service STORM of cluster Sandbox to the queue. INFO 2018-10-29 03:08:36,812 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component NIMBUS of service STORM of cluster Sandbox to the queue. INFO 2018-10-29 03:08:42,917 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component NODEMANAGER of service YARN of cluster Sandbox to the queue. INFO 2018-10-29 03:08:45,718 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component APP_TIMELINE_SERVER of service YARN of cluster Sandbox to the queue. INFO 2018-10-29 03:08:48,824 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component ZEPPELIN_MASTER of service ZEPPELIN of cluster Sandbox to the queue. INFO 2018-10-29 03:08:49,718 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component ZOOKEEPER_SERVER of service ZOOKEEPER of cluster Sandbox to the queue. INFO 2018-10-29 03:08:50,627 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component ZOOKEEPER_CLIENT of service ZOOKEEPER of cluster Sandbox to the queue. 
INFO 2018-10-29 03:08:52,255 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert mapreduce_history_server_process with UUID c04da460-8755-4138-b797-47ab11ebdfaf is disabled and will not be scheduled INFO 2018-10-29 03:08:52,256 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert mapreduce_history_server_webui with UUID a03860f2-af1b-409e-9173-9156c5081eb3 is disabled and will not be scheduled INFO 2018-10-29 03:08:52,257 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert mapreduce_history_server_rpc_latency with UUID 5e192c4b-c0a4-461b-a255-5a9bfbfef1cd is disabled and will not be scheduled INFO 2018-10-29 03:08:52,257 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert mapreduce_history_server_cpu with UUID 0b2eece1-d694-4993-b218-655ff0a07929 is disabled and will not be scheduled INFO 2018-10-29 03:09:02,277 Controller.py:304 - Heartbeat (response id = 0) with server is running... INFO 2018-10-29 03:09:02,277 Controller.py:311 - Building heartbeat message INFO 2018-10-29 03:09:14,499 Heartbeat.py:90 - Adding host info/state to heartbeat message. INFO 2018-10-29 03:09:15,186 Hardware.py:174 - Some mount points were ignored: /, /dev, /sys/fs/cgroup, /hadoop, /etc/resolv.conf, /etc/hostname, /etc/hosts, /dev/shm INFO 2018-10-29 03:09:15,253 Controller.py:320 - Sending Heartbeat (id = 0) INFO 2018-10-29 03:09:15,431 Controller.py:332 - Heartbeat response received (id = 0) INFO 2018-10-29 03:09:15,432 Controller.py:341 - Heartbeat interval is 10 seconds INFO 2018-10-29 03:09:15,432 Controller.py:353 - RegistrationCommand received - repeat agent registration INFO 2018-10-29 03:09:16,149 Controller.py:170 - Registering with sandbox.hortonworks.com (172.17.0.2) (agent='{"hardwareProfile": {"kernel": "Linux", "domain": "hortonworks.com", "physicalprocessorcount": 4, "kernelrelease": "4.11.0-1.el7.elrepo.x86_64", "uptime_days": "0", "memorytotal": 10419156, "swapfree": "4.88 GB", "memorysize": 10419156, "osfamily": "redhat", "swapsize": "4.88 GB", "processorcount": 4, "netmask": "255.255.0.0", "timezone": "UTC", "hardwareisa": "x86_64", "memoryfree": 1341284, "operatingsystem": "centos", "kernelmajversion": "4.11", "kernelversion": "4.11.0", "macaddress": "02:42:AC:11:00:02", "operatingsystemrelease": "6.9", "ipaddress": "172.17.0.2", "hostname": "sandbox", "uptime_hours": "0", "fqdn": "sandbox.hortonworks.com", "id": "root", "architecture": "x86_64", "selinux": false, "mounts": [{"available": "25927624", "used": "16486024", "percent": "39%", "device": "overlay", "mountpoint": "/", "type": "overlay", "size": "44707764"}, {"available": "25927624", "used": "16486024", "percent": "39%", "device": "/dev/sda3", "mountpoint": "/hadoop", "type": "ext4", "size": "44707764"}], "hardwaremodel": "x86_64", "uptime_seconds": "701", "interfaces": "eth0,lo"}, "currentPingPort": 8670, "prefix": "/var/lib/ambari-agent/data", "agentVersion": "2.5.0.5", "agentEnv": {"transparentHugePage": "", "hostHealth": {"agentTimeStampAtReporting": 1540782556037, "activeJavaProcs": [{"command": "/usr/lib/jvm/java/bin/java -Dproc_datanode -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml 
-Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-datanode-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode", "pid": 516, "hadoop": true, "user": "hdfs"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_namenode -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-namenode-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" 
-Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode", "pid": 519, "hadoop": true, "user": "hdfs"}, {"command": "java -Dproc_rangeradmin -XX:MaxPermSize=256m -Xmx1024m -Xms1024m -Duser.timezone=UTC -Dservername=rangeradmin -Dlogdir=/var/log/ranger/admin -Dcatalina.base=/usr/hdp/2.6.0.3-8/ranger-admin/ews -cp /usr/hdp/2.6.0.3-8/ranger-admin/ews/webapp/WEB-INF/classes/conf:/usr/hdp/2.6.0.3-8/ranger-admin/ews/lib/*:/usr/hdp/2.6.0.3-8/ranger-admin/ews/ranger_jaas/*:/usr/hdp/2.6.0.3-8/ranger-admin/ews/webapp/WEB-INF/classes/conf/ranger_jaas:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.85.x86_64/lib/*:/*: org.apache.ranger.server.tomcat.EmbeddedServer", "pid": 863, "hadoop": false, "user": "ranger"}, {"command": "java -Dproc_rangerusersync -Dlog4j.configuration=file:/etc/ranger/usersync/conf/log4j.properties -Dlogdir=/var/log/ranger/usersync -cp /usr/hdp/2.6.0.3-8/ranger-usersync/dist/*:/usr/hdp/2.6.0.3-8/ranger-usersync/lib/*:/usr/hdp/2.6.0.3-8/ranger-usersync/conf:/* org.apache.ranger.authentication.UnixAuthenticationService -enableUnixAuth", "pid": 984, "hadoop": false, "user": "ranger"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_portmap -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop--portmap-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.portmap.Portmap", "pid": 1185, "hadoop": true, "user": "root"}, {"command": "jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile 
/var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp /etc/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs -Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter", "pid": 1208, "hadoop": true, "user": "root"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Xmx1195m -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/usr/hdp/2.6.0.3-8/hive/bin/../conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive/lib/hive-service-1.2.1000.2.6.0.3-8.jar org.apache.hadoop.hive.metastore.HiveMetaStore -hiveconf hive.log.file=hivemetastore.log -hiveconf hive.log.dir=/var/log/hive", "pid": 1457, "hadoop": true, "user": "hive"}, {"command": "/usr/lib/jvm/java/bin/java -Djava.util.logging.config.file=/usr/hdp/current/oozie-server/oozie-server/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Dhdp.version=2.6.0.3-8 -Xmx1024m -Xmx1024m -Dderby.stream.error.file=/var/log/oozie/derby.log -Doozie.home.dir=/usr/hdp/2.6.0.3-8/oozie -Doozie.config.dir=/usr/hdp/current/oozie-server/conf -Doozie.log.dir=/var/log/oozie 
-Doozie.data.dir=/hadoop/oozie/data -Doozie.instance.id=sandbox.hortonworks.com -Doozie.config.file=oozie-site.xml -Doozie.log4j.file=oozie-log4j.properties -Doozie.log4j.reload=10 -Doozie.http.hostname=sandbox.hortonworks.com -Doozie.admin.port=11001 -Doozie.http.port=11000 -Doozie.https.port=11443 -Doozie.base.url=http://sandbox.hortonworks.com:11000/oozie -Doozie.https.keystore.file=/home/oozie/.keystore -Doozie.https.keystore.pass=password -Djava.library.path=/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64 -Djava.endorsed.dirs=/usr/lib/bigtop-tomcat/endorsed -classpath /usr/lib/bigtop-tomcat/bin/bootstrap.jar -Dcatalina.base=/usr/hdp/current/oozie-server/oozie-server -Dcatalina.home=/usr/lib/bigtop-tomcat -Djava.io.tmpdir=/var/tmp/oozie org.apache.catalina.startup.Bootstrap start", "pid": 1555, "hadoop": true, "user": "oozie"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Xmx1195m -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/usr/hdp/2.6.0.3-8/hive/bin/../conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive/lib/hive-service-1.2.1000.2.6.0.3-8.jar org.apache.hive.service.server.HiveServer2 --hiveconf hive.aux.jars.path=file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar -hiveconf hive.metastore.uris= -hiveconf hive.log.file=hiveserver2.log -hiveconf hive.log.dir=/var/log/hive", "pid": 1634, "hadoop": true, "user": "hive"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dwebhcat.log.dir=/var/log/webhcat/ -Dlog4j.configuration=file:///usr/hdp/2.6.0.3-8/hive-hcatalog/sbin/../etc/webhcat/webhcat-log4j.properties -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hcat -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hcat -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive-hcatalog/sbin/../share/webhcat/svr/lib/hive-webhcat-1.2.1000.2.6.0.3-8.jar org.apache.hive.hcatalog.templeton.Main", "pid": 2020, "hadoop": true, "user": "hcat"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=2.6.0.3-8 -cp /usr/hdp/current/spark2-historyserver/conf/:/usr/hdp/current/spark2-historyserver/jars/*:/usr/hdp/current/hadoop-client/conf/ -Xmx1024m org.apache.spark.deploy.history.HistoryServer", "pid": 2192, "hadoop": true, "user": "spark"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx2g -cp /usr/hdp/current/livy2-server/jars/*:/usr/hdp/current/livy2-server/conf:/etc/spark2/conf:/etc/hadoop/conf: com.cloudera.livy.server.LivyServer", "pid": 2465, "hadoop": true, "user": "livy"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=2.6.0.3-8 -cp 
/usr/hdp/current/spark2-thriftserver/conf/:/usr/hdp/current/spark2-thriftserver/jars/*:/usr/hdp/current/hadoop-client/conf/ -Xmx1024m org.apache.spark.deploy.SparkSubmit --properties-file /usr/hdp/current/spark2-thriftserver/conf/spark-thrift-sparkconf.conf --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name Thrift JDBC/ODBC Server spark-internal", "pid": 2481, "hadoop": true, "user": "spark"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_historyserver -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dhdp.version= -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=mapred -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-mapreduce/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.root.logger=INFO,console -Dhadoop.id.str=mapred -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=mapred -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-mapreduce/mapred -Dhadoop.log.file=mapred-mapred-historyserver-sandbox.hortonworks.com.log -Dhadoop.root.logger=INFO,RFA -Dmapred.jobsummary.logger=INFO,JSA -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer", "pid": 3408, "hadoop": true, "user": "mapred"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=None -Dspark.executor.memory=512m -Dspark.executor.instances=2 -Dspark.yarn.queue=default -Dfile.encoding=UTF-8 -Xms1024m -Xmx1024m -XX:MaxPermSize=512m -Dlog4j.configuration=file:///usr/hdp/current/zeppelin-server/conf/log4j.properties -Dzeppelin.log.file=/var/log/zeppelin/zeppelin-zeppelin-sandbox.hortonworks.com.log -cp ::/usr/hdp/current/zeppelin-server/lib/interpreter/*:/usr/hdp/current/zeppelin-server/lib/*:/usr/hdp/current/zeppelin-server/*::/usr/hdp/current/zeppelin-server/conf org.apache.zeppelin.server.ZeppelinServer", "pid": 3468, "hadoop": false, "user": "zeppelin"}, {"command": "jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp 
/etc/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs -Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter", "pid": 3583, "hadoop": true, "user": "hdfs"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_resourcemanager -Xmx250m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY -Drm.audit.logger=INFO,RMAUDIT -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA 
-Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath /usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/rm-config/log4j.properties org.apache.hadoop.yarn.server.resourcemanager.ResourceManager", "pid": 6891, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_timelineserver -Xmx250m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Djava.util.logging.config.file=ats.logging.properties -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath 
/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/timelineserver-config/log4j.properties org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer", "pid": 7505, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_nodemanager -Xmx512m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -server -Dnm.audit.logger=INFO,NMAUDIT -Dnm.audit.logger=INFO,NMAUDIT -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath 
/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager", "pid": 7753, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.log.file=zookeeper-zookeeper-server-sandbox.hortonworks.com.log -Dzookeeper.root.logger=INFO,ROLLINGFILE -cp /usr/hdp/current/zookeeper-server/bin/../build/classes:/usr/hdp/current/zookeeper-server/bin/../build/lib/*.jar:/usr/hdp/current/zookeeper-server/bin/../lib/xercesMinimal-1.9.6.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-provider-api-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-shared4-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-shared-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-lightweight-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-file-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-api-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-utils-3.0.8.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-interpolation-1.11.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-container-default-1.0-alpha-9-stable-1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/netty-3.7.0.Final.jar:/usr/hdp/current/zookeeper-server/bin/../lib/nekohtml-1.9.6.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-settings-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-repository-metadata-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-project-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-profile-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-plugin-registry-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-model-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-error-diagnostics-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-artifact-manager-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-artifact-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-ant-tasks-2.1.3.jar:/usr/hdp/current/zookeeper-server/bin/../lib/log4j-1.2.16.jar:/usr/hdp/current/zookeeper-server/bin/../lib/jsoup-1.7.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/jline-0.9.94.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-logging-1.1.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-io-2.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-codec-1.6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/classworlds-1.1-alp
ha-2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/backport-util-concurrent-3.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/ant-launcher-1.8.0.jar:/usr/hdp/current/zookeeper-server/bin/../lib/ant-1.8.0.jar:/usr/hdp/current/zookeeper-server/bin/../zookeeper-3.4.6.2.6.0.3-8.jar:/usr/hdp/current/zookeeper-server/bin/../src/java/lib/*.jar:/usr/hdp/current/zookeeper-server/conf::/usr/share/zookeeper/*:/usr/share/zookeeper/* -Xmx1024m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/hdp/current/zookeeper-server/conf/zoo.cfg", "pid": 16042, "hadoop": true, "user": "zookeeper"}], "liveServices": [{"status": "Unhealthy", "name": "ntpd or chronyd", "desc": "ntpd is stopped\\n"}]}, "reverseLookup": true, "alternatives": [{"name": "hue-conf", "target": "/etc/hue/conf.empty"}], "umask": "18", "firewallName": "iptables", "stackFoldersAndFiles": [{"type": "directory", "name": "/etc/hadoop"}, {"type": "directory", "name": "/etc/hbase"}, {"type": "directory", "name": "/etc/hive"}, {"type": "directory", "name": "/etc/oozie"}, {"type": "directory", "name": "/etc/sqoop"}, {"type": "directory", "name": "/etc/hue"}, {"type": "directory", "name": "/etc/zookeeper"}, {"type": "directory", "name": "/etc/flume"}, {"type": "directory", "name": "/etc/storm"}, {"type": "directory", "name": "/etc/hive-hcatalog"}, {"type": "directory", "name": "/etc/tez"}, {"type": "directory", "name": "/etc/falcon"}, {"type": "directory", "name": "/etc/knox"}, {"type": "directory", "name": "/etc/hive-webhcat"}, {"type": "directory", "name": "/etc/kafka"}, {"type": "directory", "name": "/etc/slider"}, {"type": "directory", "name": "/etc/storm-slider-client"}, {"type": "directory", "name": "/etc/spark"}, {"type": "directory", "name": "/etc/pig"}, {"type": "directory", "name": "/etc/phoenix"}, {"type": "directory", "name": "/etc/ranger"}, {"type": "directory", "name": "/etc/ambari-metrics-collector"}, {"type": "directory", "name": "/etc/ambari-metrics-monitor"}, {"type": "directory", "name": "/etc/atlas"}, {"type": "directory", "name": "/etc/zeppelin"}, {"type": "directory", "name": "/var/run/hadoop"}, {"type": "directory", "name": "/var/run/hbase"}, {"type": "directory", "name": "/var/run/hive"}, {"type": "directory", "name": "/var/run/oozie"}, {"type": "directory", "name": "/var/run/sqoop"}, {"type": "directory", "name": "/var/run/zookeeper"}, {"type": "directory", "name": "/var/run/flume"}, {"type": "directory", "name": "/var/run/storm"}, {"type": "directory", "name": "/var/run/hive-hcatalog"}, {"type": "directory", "name": "/var/run/falcon"}, {"type": "directory", "name": "/var/run/webhcat"}, {"type": "directory", "name": "/var/run/hadoop-yarn"}, {"type": "directory", "name": "/var/run/hadoop-mapreduce"}, {"type": "directory", "name": "/var/run/knox"}, {"type": "directory", "name": "/var/run/kafka"}, {"type": "directory", "name": "/var/run/spark"}, {"type": "directory", "name": "/var/run/ranger"}, {"type": "directory", "name": "/var/run/ambari-metrics-collector"}, {"type": "directory", "name": "/var/run/ambari-metrics-monitor"}, {"type": "directory", "name": "/var/run/atlas"}, {"type": "directory", "name": "/var/run/zeppelin"}, {"type": "directory", "name": "/var/log/hadoop"}, {"type": "directory", "name": "/var/log/hbase"}, {"type": "directory", "name": "/var/log/hive"}, {"type": "directory", "name": "/var/log/oozie"}, {"type": "directory", "name": "/var/log/sqoop"}, {"type": "directory", "name": "/var/log/hue"}, {"type": "directory", "name": 
"/var/log/zookeeper"}, {"type": "directory", "name": "/var/log/flume"}, {"type": "directory", "name": "/var/log/storm"}, {"type": "directory", "name": "/var/log/hive-hcatalog"}, {"type": "directory", "name": "/var/log/falcon"}, {"type": "directory", "name": "/var/log/webhcat"}, {"type": "directory", "name": "/var/log/hadoop-yarn"}, {"type": "directory", "name": "/var/log/hadoop-mapreduce"}, {"type": "directory", "name": "/var/log/knox"}, {"type": "directory", "name": "/var/log/kafka"}, {"type": "directory", "name": "/var/log/spark"}, {"type": "directory", "name": "/var/log/ranger"}, {"type": "directory", "name": "/var/log/ambari-metrics-collector"}, {"type": "directory", "name": "/var/log/ambari-metrics-monitor"}, {"type": "directory", "name": "/var/log/atlas"}, {"type": "directory", "name": "/var/log/zeppelin"}, {"type": "directory", "name": "/usr/lib/flume"}, {"type": "directory", "name": "/usr/lib/storm"}, {"type": "directory", "name": "/usr/lib/ambari-metrics-collector"}, {"type": "directory", "name": "/var/lib/hive"}, {"type": "directory", "name": "/var/lib/oozie"}, {"type": "directory", "name": "/var/lib/hue"}, {"type": "directory", "name": "/var/lib/flume"}, {"type": "directory", "name": "/var/lib/hadoop-hdfs"}, {"type": "directory", "name": "/var/lib/hadoop-yarn"}, {"type": "directory", "name": "/var/lib/hadoop-mapreduce"}, {"type": "directory", "name": "/var/lib/knox"}, {"type": "directory", "name": "/var/lib/slider"}, {"type": "directory", "name": "/var/lib/spark"}, {"type": "directory", "name": "/var/lib/ranger"}, {"type": "directory", "name": "/var/lib/ambari-metrics-collector"}, {"type": "directory", "name": "/var/lib/zeppelin"}, {"type": "directory", "name": "/var/tmp/oozie"}, {"type": "directory", "name": "/var/tmp/sqoop"}, {"type": "directory", "name": "/tmp/hive"}, {"type": "directory", "name": "/tmp/ambari-qa"}, {"type": "directory", "name": "/tmp/hadoop-hdfs"}, {"type": "directory", "name": "/tmp/spark"}, {"type": "directory", "name": "/tmp/ranger"}, {"type": "directory", "name": "/hadoop/oozie"}, {"type": "directory", "name": "/hadoop/zookeeper"}, {"type": "directory", "name": "/hadoop/hdfs"}, {"type": "directory", "name": "/hadoop/storm"}, {"type": "directory", "name": "/hadoop/falcon"}, {"type": "directory", "name": "/hadoop/yarn"}, {"type": "directory", "name": "/kafka-logs"}], "existingUsers": [{"status": "Available", "name": "hive", "homeDir": "/home/hive"}, {"status": "Available", "name": "storm", "homeDir": "/home/storm"}, {"status": "Available", "name": "zookeeper", "homeDir": "/home/zookeeper"}, {"status": "Available", "name": "oozie", "homeDir": "/home/oozie"}, {"status": "Available", "name": "atlas", "homeDir": "/home/atlas"}, {"status": "Available", "name": "ams", "homeDir": "/home/ams"}, {"status": "Available", "name": "falcon", "homeDir": "/home/falcon"}, {"status": "Available", "name": "ranger", "homeDir": "/home/ranger"}, {"status": "Available", "name": "tez", "homeDir": "/home/tez"}, {"status": "Available", "name": "zeppelin", "homeDir": "/home/zeppelin"}, {"status": "Available", "name": "spark", "homeDir": "/home/spark"}, {"status": "Available", "name": "ambari-qa", "homeDir": "/home/ambari-qa"}, {"status": "Available", "name": "flume", "homeDir": "/home/flume"}, {"status": "Available", "name": "kafka", "homeDir": "/home/kafka"}, {"status": "Available", "name": "hdfs", "homeDir": "/home/hdfs"}, {"status": "Available", "name": "sqoop", "homeDir": "/home/sqoop"}, {"status": "Available", "name": "yarn", "homeDir": "/home/yarn"}, {"status": "Available", 
"name": "mapred", "homeDir": "/home/mapred"}, {"status": "Available", "name": "hbase", "homeDir": "/home/hbase"}, {"status": "Available", "name": "knox", "homeDir": "/home/knox"}, {"status": "Available", "name": "hcat", "homeDir": "/home/hcat"}, {"status": "Available", "name": "hue", "homeDir": "/usr/lib/hue"}, {"status": "Available", "name": "kms", "homeDir": "/var/lib/ranger/kms"}], "firewallRunning": false}, "timestamp": 1540782555441, "hostname": "sandbox.hortonworks.com", "responseId": -1, "publicHostname": "sandbox.hortonworks.com"}') INFO 2018-10-29 03:10:14,025 main.py:145 - loglevel=logging.INFO INFO 2018-10-29 03:10:14,025 main.py:145 - loglevel=logging.INFO INFO 2018-10-29 03:10:14,025 main.py:145 - loglevel=logging.INFO INFO 2018-10-29 03:10:14,085 HeartbeatHandlers.py:84 - Ambari-agent received 15 signal, stopping... INFO 2018-10-29 03:10:42,247 Controller.py:196 - Registration Successful (response id = 0) INFO 2018-10-29 03:10:42,248 ClusterConfiguration.py:119 - Updating cached configurations for cluster Sandbox INFO 2018-10-29 03:10:42,327 RecoveryManager.py:577 - RecoverConfig = {'components': 'FLUME_HANDLER,HIVE_METASTORE,HIVE_SERVER,HIVE_CLIENT,WEBHCAT_SERVER,HISTORYSERVER,MAPREDUCE2_CLIENT,OOZIE_SERVER,OOZIE_CLIENT,PIG,RANGER_ADMIN,RANGER_TAGSYNC,RANGER_USERSYNC,SLIDER,LIVY2_SERVER,SPARK2_CLIENT,SPARK2_THRIFTSERVER,SPARK2_JOBHISTORYSERVER,SQOOP,TEZ_CLIENT,NODEMANAGER,YARN_CLIENT,APP_TIMELINE_SERVER,RESOURCEMANAGER,ZEPPELIN_MASTER', 'maxCount': '6', 'maxLifetimeCount': '1024', 'recoveryTimestamp': 1540782556600, 'retryGap': '5', 'type': 'AUTO_START', 'windowInMinutes': '60'} INFO 2018-10-29 03:10:42,328 RecoveryManager.py:677 - ==> Auto recovery is enabled with maximum 6 in 60 minutes with gap of 5 minutes between and lifetime max being 1024. Enabled components - FLUME_HANDLER, HIVE_METASTORE, HIVE_SERVER, HIVE_CLIENT, WEBHCAT_SERVER, HISTORYSERVER, MAPREDUCE2_CLIENT, OOZIE_SERVER, OOZIE_CLIENT, PIG, RANGER_ADMIN, RANGER_TAGSYNC, RANGER_USERSYNC, SLIDER, LIVY2_SERVER, SPARK2_CLIENT, SPARK2_THRIFTSERVER, SPARK2_JOBHISTORYSERVER, SQOOP, TEZ_CLIENT, NODEMANAGER, YARN_CLIENT, APP_TIMELINE_SERVER, RESOURCEMANAGER, ZEPPELIN_MASTER INFO 2018-10-29 03:10:42,328 AmbariConfig.py:316 - Updating config property (agent.check.remote.mounts) with value (false) INFO 2018-10-29 03:10:42,329 AmbariConfig.py:316 - Updating config property (agent.auto.cache.update) with value (true) INFO 2018-10-29 03:10:42,329 AmbariConfig.py:316 - Updating config property (agent.check.mounts.timeout) with value (0) INFO 2018-10-29 03:10:42,330 Controller.py:258 - Adding 54 status commands. Heartbeat id = 0 INFO 2018-10-29 03:10:42,341 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component INFRA_SOLR of service AMBARI_INFRA of cluster Sandbox to the queue. INFO 2018-10-29 03:10:43,381 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component INFRA_SOLR_CLIENT of service AMBARI_INFRA of cluster Sandbox to the queue. INFO 2018-10-29 03:10:44,617 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component METRICS_MONITOR of service AMBARI_METRICS of cluster Sandbox to the queue. INFO 2018-10-29 03:10:44,700 main.py:286 - Agent not going to die gracefully, going to execute kill -9 INFO 2018-10-29 03:10:44,763 ExitHelper.py:56 - Performing cleanup before exiting... 
INFO 2018-10-29 03:10:46,908 main.py:145 - loglevel=logging.INFO INFO 2018-10-29 03:10:46,909 main.py:145 - loglevel=logging.INFO INFO 2018-10-29 03:10:46,909 main.py:145 - loglevel=logging.INFO INFO 2018-10-29 03:10:46,915 DataCleaner.py:39 - Data cleanup thread started INFO 2018-10-29 03:10:46,924 DataCleaner.py:120 - Data cleanup started INFO 2018-10-29 03:10:46,936 DataCleaner.py:122 - Data cleanup finished INFO 2018-10-29 03:10:47,313 PingPortListener.py:50 - Ping port listener started on port: 8670 INFO 2018-10-29 03:10:47,411 main.py:436 - Connecting to Ambari server at https://sandbox.hortonworks.com:8440 (172.17.0.2) INFO 2018-10-29 03:10:47,411 NetUtil.py:67 - Connecting to https://sandbox.hortonworks.com:8440/ca INFO 2018-10-29 03:10:47,594 main.py:446 - Connected to Ambari server sandbox.hortonworks.com INFO 2018-10-29 03:10:47,644 threadpool.py:58 - Started thread pool with 3 core threads and 20 maximum threads INFO 2018-10-29 03:10:47,656 AlertSchedulerHandler.py:290 - [AlertScheduler] Caching cluster Sandbox with alert hash e13d86c3bc7e07797530241453dc0c0d INFO 2018-10-29 03:10:47,775 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert mapreduce_history_server_process with UUID c04da460-8755-4138-b797-47ab11ebdfaf is disabled and will not be scheduled INFO 2018-10-29 03:10:47,776 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert mapreduce_history_server_webui with UUID a03860f2-af1b-409e-9173-9156c5081eb3 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,776 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert mapreduce_history_server_rpc_latency with UUID 5e192c4b-c0a4-461b-a255-5a9bfbfef1cd is disabled and will not be scheduled INFO 2018-10-29 03:10:47,776 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert mapreduce_history_server_cpu with UUID 0b2eece1-d694-4993-b218-655ff0a07929 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,776 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert SPARK_JOBHISTORYSERVER_PROCESS with UUID 64990312-4d14-4d67-bf79-7f6f93df05da is disabled and will not be scheduled INFO 2018-10-29 03:10:47,777 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert infra_solr with UUID 1fe14c02-bfcb-4e50-8d69-742c9897716a is disabled and will not be scheduled INFO 2018-10-29 03:10:47,778 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ranger_admin_process with UUID 240dca42-cdeb-4660-b394-f15e071afa84 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,778 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ranger_admin_password_check with UUID 603d8ad5-75c1-4a27-83cd-0ae0bcbb1036 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,778 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ranger_usersync_process with UUID ad898e82-9a6e-4f17-8279-010983b64215 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,778 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert SPARK2_JOBHISTORYSERVER_PROCESS with UUID 20c51b52-15aa-4970-9d14-ee3a1622fd2f is disabled and will not be scheduled INFO 2018-10-29 03:10:47,779 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert metadata_server_webui with UUID bf2aca96-7621-47a9-82ba-c5c7e591736f is disabled and will not be scheduled INFO 2018-10-29 03:10:47,779 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert flume_agent_status with UUID d6a8fac2-c51f-4c7e-9b4c-7ad47a5d03d7 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,779 AlertSchedulerHandler.py:358 - [AlertScheduler] The 
alert zookeeper_server_process with UUID e8aeceea-22ab-4880-acf0-cd084f32d0da is disabled and will not be scheduled INFO 2018-10-29 03:10:47,780 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ams_metrics_monitor_process with UUID d4fa3373-ea4d-47db-9d86-4620568f1394 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,780 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ams_metrics_collector_process with UUID a796e0c1-8b9a-4e11-8984-a12e8a9b78fb is disabled and will not be scheduled INFO 2018-10-29 03:10:47,780 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ams_metrics_collector_hbase_master_process with UUID 7ef7b733-1401-4afe-a360-901304368766 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,780 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ams_metrics_collector_hbase_master_cpu with UUID d4472543-c949-4a35-a2d2-f6eea346b922 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,781 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ams_metrics_collector_autostart with UUID 73ffd827-9119-452f-9d9d-d92b1785d195 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,781 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert grafana_webui with UUID 19c482da-a3c8-4914-bf23-75b9e4ea57dc is disabled and will not be scheduled INFO 2018-10-29 03:10:47,781 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert storm_webui with UUID 9f608c19-bc48-4c7e-b00b-21cdac856f6d is disabled and will not be scheduled INFO 2018-10-29 03:10:47,782 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert storm_supervisor_process with UUID 94856057-a19b-4d97-a548-51a136a44223 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,782 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert storm_nimbus_process with UUID 08207d61-5aec-48bd-869a-8bc4d4fcd3a5 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,782 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert storm_drpc_server with UUID 83a26896-e425-4a67-9b1b-fc8b9bdd032a is disabled and will not be scheduled INFO 2018-10-29 03:10:47,782 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert oozie_server_webui with UUID 79e1191f-4cc5-49a4-858b-0a3404208b94 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,783 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert oozie_server_status with UUID b6836f07-f5f3-4e19-90a2-112b84cc05dd is disabled and will not be scheduled INFO 2018-10-29 03:10:47,783 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_cpu with UUID fa67564f-f463-402f-a1d2-2fa395b03f99 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,783 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert secondary_namenode_process with UUID 6ed52c5d-d900-4327-9bc2-138ee00a5735 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,784 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_hdfs_pending_deletion_blocks with UUID 6bde7b44-4f70-471f-8d2e-7c83dd80de2b is disabled and will not be scheduled INFO 2018-10-29 03:10:47,784 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_client_rpc_queue_latency_daily with UUID 11cbc1bd-05ae-4198-ad37-1488f978528a is disabled and will not be scheduled INFO 2018-10-29 03:10:47,784 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_ha_health with UUID 3abf9692-1adc-4a8c-b555-2b07af91cb51 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,784 AlertSchedulerHandler.py:358 - [AlertScheduler] 
The alert datanode_heap_usage with UUID 1e8eea58-1d72-482c-a238-ef6db55c7b44 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,785 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert datanode_health_summary with UUID ad46e3d1-bb0a-4640-8c92-947ffb4343ed is disabled and will not be scheduled INFO 2018-10-29 03:10:47,785 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert datanode_unmounted_data_dir with UUID 5eff1c2f-5c21-409d-a748-95ebc12b65ed is disabled and will not be scheduled INFO 2018-10-29 03:10:47,785 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_service_rpc_queue_latency_daily with UUID 52157e50-ced5-4515-8d3d-4fc6527031fc is disabled and will not be scheduled INFO 2018-10-29 03:10:47,786 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert datanode_process with UUID a0e66a6d-dcec-4f62-9085-abd672b487b1 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,786 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_client_rpc_processing_latency_daily with UUID b2c4fadb-b5bd-42fa-8903-0d2572d445d4 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,786 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_hdfs_blocks_health with UUID d9b77fc7-0f2f-4471-a0f9-85f429a32296 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,786 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_webui with UUID e4895dfa-f6ce-48bd-bc9e-884e63a8c996 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,787 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert datanode_webui with UUID 3760c848-0090-4aa8-ba71-f4154ec8d29a is disabled and will not be scheduled INFO 2018-10-29 03:10:47,787 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert datanode_storage with UUID 9e1a3236-6123-47ec-b9c0-fe4f9492d676 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,787 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_service_rpc_processing_latency_hourly with UUID 1de98065-b774-4678-9f58-4b4a7b1052d7 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,788 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert nfsgateway_process with UUID 4840a7ba-2563-4d51-8779-c6c773e715d4 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,788 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_increase_in_storage_capacity_usage_daily with UUID ed7e8afb-5537-48b4-82c0-3ca687f0af35 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,788 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_client_rpc_queue_latency_hourly with UUID 1934f3b1-60d2-42cb-a26f-986625718162 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,788 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_service_rpc_processing_latency_daily with UUID 10849f29-8045-4ad2-96fd-03696df045e0 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,789 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert upgrade_finalized_state with UUID e7ccee0c-d9da-4a81-bf85-5465aaf01a8a is disabled and will not be scheduled INFO 2018-10-29 03:10:47,789 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_client_rpc_processing_latency_hourly with UUID 323cf92d-7bab-4a26-a22b-2ea53015aa0a is disabled and will not be scheduled INFO 2018-10-29 03:10:47,789 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_increase_in_storage_capacity_usage_weekly with UUID 6e553a1c-f113-4c78-8e20-609b6c83b7f5 
is disabled and will not be scheduled INFO 2018-10-29 03:10:47,790 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_directory_status with UUID 6c67bac7-5974-443e-93c8-d46c9c01ea0d is disabled and will not be scheduled INFO 2018-10-29 03:10:47,790 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_service_rpc_queue_latency_hourly with UUID 008b1529-a959-45de-b38b-6bb9ec6a1dcb is disabled and will not be scheduled INFO 2018-10-29 03:10:47,790 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert increase_nn_heap_usage_weekly with UUID a3afa569-6f22-4e52-82cf-cea506414e8d is disabled and will not be scheduled INFO 2018-10-29 03:10:47,790 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_hdfs_capacity_utilization with UUID 732e7066-2433-49b2-b15d-a1573a304adc is disabled and will not be scheduled INFO 2018-10-29 03:10:47,791 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_rpc_latency with UUID 94957f0d-9cf0-433b-9b11-19674aa59bd7 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,793 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_last_checkpoint with UUID e9b8a11a-e2f2-4e17-9d0f-28dee96d4b33 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,793 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert increase_nn_heap_usage_daily with UUID 8249d880-7176-4d61-a26e-f75ee507511c is disabled and will not be scheduled INFO 2018-10-29 03:10:47,793 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert falcon_server_webui with UUID 61b937eb-6483-48f3-b8c0-71efcba52e85 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,794 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert falcon_server_process with UUID 05817f4d-0e8e-4d2c-984c-9f9a13257eb6 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,794 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert yarn_nodemanager_health with UUID 27535dc0-51c4-4716-83d0-4ccb5fe86c70 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,795 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert yarn_resourcemanager_webui with UUID 511da394-29e1-4aa5-a5b6-be5b93354f10 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,796 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert yarn_resourcemanager_cpu with UUID bba36132-669d-4198-ba2a-ff67e31d695e is disabled and will not be scheduled INFO 2018-10-29 03:10:47,796 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert yarn_nodemanager_webui with UUID d1705ebc-df53-4dcb-8961-ce95b6de1ed5 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,796 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert yarn_resourcemanager_rpc_latency with UUID 22cb0500-1321-478b-84fa-5300723aac5c is disabled and will not be scheduled INFO 2018-10-29 03:10:47,797 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert nodemanager_health_summary with UUID 2f59de42-83f8-41ce-a9cc-b989ed54f7a6 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,797 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert yarn_app_timeline_server_webui with UUID 63aa90d0-c786-4bb2-a14b-f83ccb66bb96 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,797 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert zeppelin_server_status with UUID 366c4af1-d19a-48c2-a456-685fdf720e10 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,797 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert hbase_master_process with UUID 
8cb278a0-7f9f-4bb9-a4f5-c881ee3a3efd is disabled and will not be scheduled INFO 2018-10-29 03:10:47,798 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert hbase_master_cpu with UUID 40a8dfee-fed8-4ade-914a-adbeefa76eba is disabled and will not be scheduled INFO 2018-10-29 03:10:47,798 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert hbase_regionserver_process with UUID a4ff32b7-0a90-4e69-95c0-a5e2ce07e751 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,798 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert knox_gateway_process with UUID 6b93cb6c-b38a-440e-8b98-906fad0365ad is disabled and will not be scheduled INFO 2018-10-29 03:10:47,799 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert hive_metastore_process with UUID efd35947-2ed7-4407-8152-ffc2e84ba94a is disabled and will not be scheduled INFO 2018-10-29 03:10:47,799 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert hive_server_process with UUID e76e8d09-a6d7-45aa-97ed-ea516947d9f2 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,799 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert hive_webhcat_server_status with UUID 801c56b9-d65c-43da-a214-0a7835bdea65 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,799 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert kafka_broker_process with UUID 1704301f-7c12-4d93-a4fa-b6810dd2b324 is disabled and will not be scheduled INFO 2018-10-29 03:10:47,800 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ambari_agent_disk_usage with UUID 9c128bc6-d731-42cb-b139-624db29c4e7b is disabled and will not be scheduled INFO 2018-10-29 03:10:47,800 AlertSchedulerHandler.py:175 - [AlertScheduler] Starting ; currently running: False INFO 2018-10-29 03:10:47,909 hostname.py:98 - Read public hostname 'sandbox.hortonworks.com' using socket.getfqdn() INFO 2018-10-29 03:10:47,969 Hardware.py:174 - Some mount points were ignored: /dev, /sys/fs/cgroup, /etc/resolv.conf, /etc/hostname, /etc/hosts, /dev/shm INFO 2018-10-29 03:10:48,049 Facter.py:202 - Directory: '/etc/resource_overrides' does not exist - it won't be used for gathering system resources. 
INFO 2018-10-29 03:10:48,772 Controller.py:170 - Registering with sandbox.hortonworks.com (172.17.0.2) (agent='{"hardwareProfile": {"kernel": "Linux", "domain": "hortonworks.com", "physicalprocessorcount": 4, "kernelrelease": "4.11.0-1.el7.elrepo.x86_64", "uptime_days": "0", "memorytotal": 10419156, "swapfree": "4.87 GB", "memorysize": 10419156, "osfamily": "redhat", "swapsize": "4.88 GB", "processorcount": 4, "netmask": "255.255.0.0", "timezone": "UTC", "hardwareisa": "x86_64", "memoryfree": 1546824, "operatingsystem": "centos", "kernelmajversion": "4.11", "kernelversion": "4.11.0", "macaddress": "02:42:AC:11:00:02", "operatingsystemrelease": "6.9", "ipaddress": "172.17.0.2", "hostname": "sandbox", "uptime_hours": "14", "fqdn": "sandbox.hortonworks.com", "id": "maria_dev", "architecture": "x86_64", "selinux": false, "mounts": [{"available": "25746000", "used": "16667648", "percent": "40%", "device": "overlay", "mountpoint": "/", "type": "overlay", "size": "44707764"}, {"available": "25746000", "used": "16667648", "percent": "40%", "device": "/dev/sda3", "mountpoint": "/hadoop", "type": "ext4", "size": "44707764"}], "hardwaremodel": "x86_64", "uptime_seconds": "52785", "interfaces": "eth0,lo"}, "currentPingPort": 8670, "prefix": "/var/lib/ambari-agent/data", "agentVersion": "2.5.0.5", "agentEnv": {"transparentHugePage": "", "hostHealth": {"agentTimeStampAtReporting": 1540782648673, "activeJavaProcs": [{"command": "/usr/lib/jvm/java/bin/java -Dproc_datanode -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-datanode-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS 
-Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode", "pid": 516, "hadoop": true, "user": "hdfs"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_namenode -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-namenode-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode", "pid": 519, "hadoop": true, "user": "hdfs"}, {"command": "java -Dproc_rangeradmin -XX:MaxPermSize=256m -Xmx1024m -Xms1024m -Duser.timezone=UTC -Dservername=rangeradmin -Dlogdir=/var/log/ranger/admin -Dcatalina.base=/usr/hdp/2.6.0.3-8/ranger-admin/ews -cp 
/usr/hdp/2.6.0.3-8/ranger-admin/ews/webapp/WEB-INF/classes/conf:/usr/hdp/2.6.0.3-8/ranger-admin/ews/lib/*:/usr/hdp/2.6.0.3-8/ranger-admin/ews/ranger_jaas/*:/usr/hdp/2.6.0.3-8/ranger-admin/ews/webapp/WEB-INF/classes/conf/ranger_jaas:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.85.x86_64/lib/*:/*: org.apache.ranger.server.tomcat.EmbeddedServer", "pid": 863, "hadoop": false, "user": "ranger"}, {"command": "java -Dproc_rangerusersync -Dlog4j.configuration=file:/etc/ranger/usersync/conf/log4j.properties -Dlogdir=/var/log/ranger/usersync -cp /usr/hdp/2.6.0.3-8/ranger-usersync/dist/*:/usr/hdp/2.6.0.3-8/ranger-usersync/lib/*:/usr/hdp/2.6.0.3-8/ranger-usersync/conf:/* org.apache.ranger.authentication.UnixAuthenticationService -enableUnixAuth", "pid": 984, "hadoop": false, "user": "ranger"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_portmap -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop--portmap-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.portmap.Portmap", "pid": 1185, "hadoop": true, "user": "root"}, {"command": "jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp /etc/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ 
-Dhadoop.log.file=hadoop-hdfs-nfs3-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs -Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter", "pid": 1208, "hadoop": true, "user": "root"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Xmx1195m -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/usr/hdp/2.6.0.3-8/hive/bin/../conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive/lib/hive-service-1.2.1000.2.6.0.3-8.jar org.apache.hadoop.hive.metastore.HiveMetaStore -hiveconf hive.log.file=hivemetastore.log -hiveconf hive.log.dir=/var/log/hive", "pid": 1457, "hadoop": true, "user": "hive"}, {"command": "/usr/lib/jvm/java/bin/java -Djava.util.logging.config.file=/usr/hdp/current/oozie-server/oozie-server/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Dhdp.version=2.6.0.3-8 -Xmx1024m -Xmx1024m -Dderby.stream.error.file=/var/log/oozie/derby.log -Doozie.home.dir=/usr/hdp/2.6.0.3-8/oozie -Doozie.config.dir=/usr/hdp/current/oozie-server/conf -Doozie.log.dir=/var/log/oozie -Doozie.data.dir=/hadoop/oozie/data -Doozie.instance.id=sandbox.hortonworks.com -Doozie.config.file=oozie-site.xml -Doozie.log4j.file=oozie-log4j.properties -Doozie.log4j.reload=10 -Doozie.http.hostname=sandbox.hortonworks.com -Doozie.admin.port=11001 -Doozie.http.port=11000 -Doozie.https.port=11443 -Doozie.base.url=http://sandbox.hortonworks.com:11000/oozie -Doozie.https.keystore.file=/home/oozie/.keystore -Doozie.https.keystore.pass=password -Djava.library.path=/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64 -Djava.endorsed.dirs=/usr/lib/bigtop-tomcat/endorsed -classpath /usr/lib/bigtop-tomcat/bin/bootstrap.jar -Dcatalina.base=/usr/hdp/current/oozie-server/oozie-server -Dcatalina.home=/usr/lib/bigtop-tomcat -Djava.io.tmpdir=/var/tmp/oozie org.apache.catalina.startup.Bootstrap start", "pid": 1555, "hadoop": true, "user": "oozie"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Xmx1195m -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/usr/hdp/2.6.0.3-8/hive/bin/../conf/parquet-logging.properties 
-Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive/lib/hive-service-1.2.1000.2.6.0.3-8.jar org.apache.hive.service.server.HiveServer2 --hiveconf hive.aux.jars.path=file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar -hiveconf hive.metastore.uris= -hiveconf hive.log.file=hiveserver2.log -hiveconf hive.log.dir=/var/log/hive", "pid": 1634, "hadoop": true, "user": "hive"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dwebhcat.log.dir=/var/log/webhcat/ -Dlog4j.configuration=file:///usr/hdp/2.6.0.3-8/hive-hcatalog/sbin/../etc/webhcat/webhcat-log4j.properties -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hcat -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hcat -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive-hcatalog/sbin/../share/webhcat/svr/lib/hive-webhcat-1.2.1000.2.6.0.3-8.jar org.apache.hive.hcatalog.templeton.Main", "pid": 2020, "hadoop": true, "user": "hcat"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=2.6.0.3-8 -cp /usr/hdp/current/spark2-historyserver/conf/:/usr/hdp/current/spark2-historyserver/jars/*:/usr/hdp/current/hadoop-client/conf/ -Xmx1024m org.apache.spark.deploy.history.HistoryServer", "pid": 2192, "hadoop": true, "user": "spark"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx2g -cp /usr/hdp/current/livy2-server/jars/*:/usr/hdp/current/livy2-server/conf:/etc/spark2/conf:/etc/hadoop/conf: com.cloudera.livy.server.LivyServer", "pid": 2465, "hadoop": true, "user": "livy"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=2.6.0.3-8 -cp /usr/hdp/current/spark2-thriftserver/conf/:/usr/hdp/current/spark2-thriftserver/jars/*:/usr/hdp/current/hadoop-client/conf/ -Xmx1024m org.apache.spark.deploy.SparkSubmit --properties-file /usr/hdp/current/spark2-thriftserver/conf/spark-thrift-sparkconf.conf --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name Thrift JDBC/ODBC Server spark-internal", "pid": 2481, "hadoop": true, "user": "spark"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_historyserver -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dhdp.version= -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=mapred -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-mapreduce/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.root.logger=INFO,console -Dhadoop.id.str=mapred -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=mapred -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native 
-Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-mapreduce/mapred -Dhadoop.log.file=mapred-mapred-historyserver-sandbox.hortonworks.com.log -Dhadoop.root.logger=INFO,RFA -Dmapred.jobsummary.logger=INFO,JSA -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer", "pid": 3408, "hadoop": true, "user": "mapred"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=None -Dspark.executor.memory=512m -Dspark.executor.instances=2 -Dspark.yarn.queue=default -Dfile.encoding=UTF-8 -Xms1024m -Xmx1024m -XX:MaxPermSize=512m -Dlog4j.configuration=file:///usr/hdp/current/zeppelin-server/conf/log4j.properties -Dzeppelin.log.file=/var/log/zeppelin/zeppelin-zeppelin-sandbox.hortonworks.com.log -cp ::/usr/hdp/current/zeppelin-server/lib/interpreter/*:/usr/hdp/current/zeppelin-server/lib/*:/usr/hdp/current/zeppelin-server/*::/usr/hdp/current/zeppelin-server/conf org.apache.zeppelin.server.ZeppelinServer", "pid": 3468, "hadoop": false, "user": "zeppelin"}, {"command": "jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp /etc/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs -Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter", "pid": 3583, "hadoop": true, "user": "hdfs"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_resourcemanager -Xmx250m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log 
-Dyarn.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY -Drm.audit.logger=INFO,RMAUDIT -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath /usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/rm-config/log4j.properties org.apache.hadoop.yarn.server.resourcemanager.ResourceManager", "pid": 6891, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_timelineserver -Xmx250m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Djava.util.logging.config.file=ats.logging.properties -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log 
-Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath /usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/timelineserver-config/log4j.properties org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer", "pid": 7505, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_nodemanager -Xmx512m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -server -Dnm.audit.logger=INFO,NMAUDIT -Dnm.audit.logger=INFO,NMAUDIT -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath 
/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager", "pid": 7753, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.log.file=zookeeper-zookeeper-server-sandbox.hortonworks.com.log -Dzookeeper.root.logger=INFO,ROLLINGFILE -cp /usr/hdp/current/zookeeper-server/bin/../build/classes:/usr/hdp/current/zookeeper-server/bin/../build/lib/*.jar:/usr/hdp/current/zookeeper-server/bin/../lib/xercesMinimal-1.9.6.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-provider-api-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-shared4-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-shared-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-lightweight-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-file-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-api-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-utils-3.0.8.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-interpolation-1.11.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-container-default-1.0-alpha-9-stable-1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/netty-3.7.0.Final.jar:/usr/hdp/current/zookeeper-server/bin/../lib/nekohtml-1.9.6.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-settings-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-repository-metadata-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-project-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-profile-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-plugin-registry-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-model-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-error-diagnostics-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-artifact-manager-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-artifact-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-ant-tasks-2.1.3.jar:/usr/hdp/current/zookeeper-server/bin/../lib/log4j-1.2.16.jar:/usr/hdp/current/zookeeper-server/bin/../lib/jsoup-1.7.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/jline-0.9.94.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-logging-1.1.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-io-2.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-codec-1.6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/classworlds-1.1-alp
ha-2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/backport-util-concurrent-3.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/ant-launcher-1.8.0.jar:/usr/hdp/current/zookeeper-server/bin/../lib/ant-1.8.0.jar:/usr/hdp/current/zookeeper-server/bin/../zookeeper-3.4.6.2.6.0.3-8.jar:/usr/hdp/current/zookeeper-server/bin/../src/java/lib/*.jar:/usr/hdp/current/zookeeper-server/conf::/usr/share/zookeeper/*:/usr/share/zookeeper/* -Xmx1024m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/hdp/current/zookeeper-server/conf/zoo.cfg", "pid": 16042, "hadoop": true, "user": "zookeeper"}], "liveServices": [{"status": "Unhealthy", "name": "ntpd or chronyd", "desc": "ntpd is stopped\\n"}]}, "reverseLookup": true, "alternatives": [{"name": "hue-conf", "target": "/etc/hue/conf.empty"}], "umask": "18", "firewallName": "iptables", "stackFoldersAndFiles": [{"type": "directory", "name": "/etc/hadoop"}, {"type": "directory", "name": "/etc/hbase"}, {"type": "directory", "name": "/etc/hive"}, {"type": "directory", "name": "/etc/oozie"}, {"type": "directory", "name": "/etc/sqoop"}, {"type": "directory", "name": "/etc/hue"}, {"type": "directory", "name": "/etc/zookeeper"}, {"type": "directory", "name": "/etc/flume"}, {"type": "directory", "name": "/etc/storm"}, {"type": "directory", "name": "/etc/hive-hcatalog"}, {"type": "directory", "name": "/etc/tez"}, {"type": "directory", "name": "/etc/falcon"}, {"type": "directory", "name": "/etc/knox"}, {"type": "directory", "name": "/etc/hive-webhcat"}, {"type": "directory", "name": "/etc/kafka"}, {"type": "directory", "name": "/etc/slider"}, {"type": "directory", "name": "/etc/storm-slider-client"}, {"type": "directory", "name": "/etc/spark"}, {"type": "directory", "name": "/etc/pig"}, {"type": "directory", "name": "/etc/phoenix"}, {"type": "directory", "name": "/etc/ranger"}, {"type": "directory", "name": "/etc/ambari-metrics-collector"}, {"type": "directory", "name": "/etc/ambari-metrics-monitor"}, {"type": "directory", "name": "/etc/atlas"}, {"type": "directory", "name": "/etc/zeppelin"}, {"type": "directory", "name": "/var/run/hadoop"}, {"type": "directory", "name": "/var/run/hbase"}, {"type": "directory", "name": "/var/run/hive"}, {"type": "directory", "name": "/var/run/oozie"}, {"type": "directory", "name": "/var/run/sqoop"}, {"type": "directory", "name": "/var/run/zookeeper"}, {"type": "directory", "name": "/var/run/flume"}, {"type": "directory", "name": "/var/run/storm"}, {"type": "directory", "name": "/var/run/hive-hcatalog"}, {"type": "directory", "name": "/var/run/falcon"}, {"type": "directory", "name": "/var/run/webhcat"}, {"type": "directory", "name": "/var/run/hadoop-yarn"}, {"type": "directory", "name": "/var/run/hadoop-mapreduce"}, {"type": "directory", "name": "/var/run/knox"}, {"type": "directory", "name": "/var/run/kafka"}, {"type": "directory", "name": "/var/run/spark"}, {"type": "directory", "name": "/var/run/ranger"}, {"type": "directory", "name": "/var/run/ambari-metrics-collector"}, {"type": "directory", "name": "/var/run/ambari-metrics-monitor"}, {"type": "directory", "name": "/var/run/atlas"}, {"type": "directory", "name": "/var/run/zeppelin"}, {"type": "directory", "name": "/var/log/hadoop"}, {"type": "directory", "name": "/var/log/hbase"}, {"type": "directory", "name": "/var/log/hive"}, {"type": "directory", "name": "/var/log/oozie"}, {"type": "directory", "name": "/var/log/sqoop"}, {"type": "directory", "name": "/var/log/hue"}, {"type": "directory", "name": 
"/var/log/zookeeper"}, {"type": "directory", "name": "/var/log/flume"}, {"type": "directory", "name": "/var/log/storm"}, {"type": "directory", "name": "/var/log/hive-hcatalog"}, {"type": "directory", "name": "/var/log/falcon"}, {"type": "directory", "name": "/var/log/webhcat"}, {"type": "directory", "name": "/var/log/hadoop-yarn"}, {"type": "directory", "name": "/var/log/hadoop-mapreduce"}, {"type": "directory", "name": "/var/log/knox"}, {"type": "directory", "name": "/var/log/kafka"}, {"type": "directory", "name": "/var/log/spark"}, {"type": "directory", "name": "/var/log/ranger"}, {"type": "directory", "name": "/var/log/ambari-metrics-collector"}, {"type": "directory", "name": "/var/log/ambari-metrics-monitor"}, {"type": "directory", "name": "/var/log/atlas"}, {"type": "directory", "name": "/var/log/zeppelin"}, {"type": "directory", "name": "/usr/lib/flume"}, {"type": "directory", "name": "/usr/lib/storm"}, {"type": "directory", "name": "/usr/lib/ambari-metrics-collector"}, {"type": "directory", "name": "/var/lib/hive"}, {"type": "directory", "name": "/var/lib/oozie"}, {"type": "directory", "name": "/var/lib/hue"}, {"type": "directory", "name": "/var/lib/flume"}, {"type": "directory", "name": "/var/lib/hadoop-hdfs"}, {"type": "directory", "name": "/var/lib/hadoop-yarn"}, {"type": "directory", "name": "/var/lib/hadoop-mapreduce"}, {"type": "directory", "name": "/var/lib/knox"}, {"type": "directory", "name": "/var/lib/slider"}, {"type": "directory", "name": "/var/lib/spark"}, {"type": "directory", "name": "/var/lib/ranger"}, {"type": "directory", "name": "/var/lib/ambari-metrics-collector"}, {"type": "directory", "name": "/var/lib/zeppelin"}, {"type": "directory", "name": "/var/tmp/oozie"}, {"type": "directory", "name": "/var/tmp/sqoop"}, {"type": "directory", "name": "/tmp/hive"}, {"type": "directory", "name": "/tmp/ambari-qa"}, {"type": "directory", "name": "/tmp/hadoop-hdfs"}, {"type": "directory", "name": "/tmp/spark"}, {"type": "directory", "name": "/tmp/ranger"}, {"type": "directory", "name": "/hadoop/oozie"}, {"type": "directory", "name": "/hadoop/zookeeper"}, {"type": "directory", "name": "/hadoop/hdfs"}, {"type": "directory", "name": "/hadoop/storm"}, {"type": "directory", "name": "/hadoop/falcon"}, {"type": "directory", "name": "/hadoop/yarn"}, {"type": "directory", "name": "/kafka-logs"}], "existingUsers": [{"status": "Available", "name": "hive", "homeDir": "/home/hive"}, {"status": "Available", "name": "storm", "homeDir": "/home/storm"}, {"status": "Available", "name": "zookeeper", "homeDir": "/home/zookeeper"}, {"status": "Available", "name": "oozie", "homeDir": "/home/oozie"}, {"status": "Available", "name": "atlas", "homeDir": "/home/atlas"}, {"status": "Available", "name": "ams", "homeDir": "/home/ams"}, {"status": "Available", "name": "falcon", "homeDir": "/home/falcon"}, {"status": "Available", "name": "ranger", "homeDir": "/home/ranger"}, {"status": "Available", "name": "tez", "homeDir": "/home/tez"}, {"status": "Available", "name": "zeppelin", "homeDir": "/home/zeppelin"}, {"status": "Available", "name": "spark", "homeDir": "/home/spark"}, {"status": "Available", "name": "ambari-qa", "homeDir": "/home/ambari-qa"}, {"status": "Available", "name": "flume", "homeDir": "/home/flume"}, {"status": "Available", "name": "kafka", "homeDir": "/home/kafka"}, {"status": "Available", "name": "hdfs", "homeDir": "/home/hdfs"}, {"status": "Available", "name": "sqoop", "homeDir": "/home/sqoop"}, {"status": "Available", "name": "yarn", "homeDir": "/home/yarn"}, {"status": "Available", 
"name": "mapred", "homeDir": "/home/mapred"}, {"status": "Available", "name": "hbase", "homeDir": "/home/hbase"}, {"status": "Available", "name": "knox", "homeDir": "/home/knox"}, {"status": "Available", "name": "hcat", "homeDir": "/home/hcat"}, {"status": "Available", "name": "hue", "homeDir": "/usr/lib/hue"}, {"status": "Available", "name": "kms", "homeDir": "/var/lib/ranger/kms"}], "firewallRunning": false}, "timestamp": 1540782648098, "hostname": "sandbox.hortonworks.com", "responseId": -1, "publicHostname": "sandbox.hortonworks.com"}') INFO 2018-10-29 03:10:48,776 NetUtil.py:67 - Connecting to https://sandbox.hortonworks.com:8440/connection_info INFO 2018-10-29 03:10:48,974 security.py:93 - SSL Connect being called.. connecting to the server INFO 2018-10-29 03:10:49,189 security.py:60 - SSL connection established. Two-way SSL authentication is turned off on the server. INFO 2018-10-29 03:12:15,167 Controller.py:196 - Registration Successful (response id = 0) INFO 2018-10-29 03:12:15,167 ClusterConfiguration.py:119 - Updating cached configurations for cluster Sandbox INFO 2018-10-29 03:12:15,235 RecoveryManager.py:577 - RecoverConfig = {'components': 'FLUME_HANDLER,HIVE_METASTORE,HIVE_SERVER,HIVE_CLIENT,WEBHCAT_SERVER,HISTORYSERVER,MAPREDUCE2_CLIENT,OOZIE_SERVER,OOZIE_CLIENT,PIG,RANGER_ADMIN,RANGER_TAGSYNC,RANGER_USERSYNC,SLIDER,LIVY2_SERVER,SPARK2_CLIENT,SPARK2_THRIFTSERVER,SPARK2_JOBHISTORYSERVER,SQOOP,TEZ_CLIENT,NODEMANAGER,YARN_CLIENT,APP_TIMELINE_SERVER,RESOURCEMANAGER,ZEPPELIN_MASTER', 'maxCount': '6', 'maxLifetimeCount': '1024', 'recoveryTimestamp': 1540782649631, 'retryGap': '5', 'type': 'AUTO_START', 'windowInMinutes': '60'} INFO 2018-10-29 03:12:15,235 RecoveryManager.py:677 - ==> Auto recovery is enabled with maximum 6 in 60 minutes with gap of 5 minutes between and lifetime max being 1024. Enabled components - FLUME_HANDLER, HIVE_METASTORE, HIVE_SERVER, HIVE_CLIENT, WEBHCAT_SERVER, HISTORYSERVER, MAPREDUCE2_CLIENT, OOZIE_SERVER, OOZIE_CLIENT, PIG, RANGER_ADMIN, RANGER_TAGSYNC, RANGER_USERSYNC, SLIDER, LIVY2_SERVER, SPARK2_CLIENT, SPARK2_THRIFTSERVER, SPARK2_JOBHISTORYSERVER, SQOOP, TEZ_CLIENT, NODEMANAGER, YARN_CLIENT, APP_TIMELINE_SERVER, RESOURCEMANAGER, ZEPPELIN_MASTER INFO 2018-10-29 03:12:15,236 AmbariConfig.py:316 - Updating config property (agent.check.remote.mounts) with value (false) INFO 2018-10-29 03:12:15,236 AmbariConfig.py:316 - Updating config property (agent.auto.cache.update) with value (true) INFO 2018-10-29 03:12:15,237 AmbariConfig.py:316 - Updating config property (agent.check.mounts.timeout) with value (0) INFO 2018-10-29 03:12:15,237 Controller.py:258 - Adding 54 status commands. 
Heartbeat id = 0 INFO 2018-10-29 03:12:15,243 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for INFRA_SOLR INFO 2018-10-29 03:12:15,244 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for INFRA_SOLR_CLIENT INFO 2018-10-29 03:12:15,244 RecoveryManager.py:204 - New status, desired status is set to STARTED for METRICS_MONITOR INFO 2018-10-29 03:12:15,244 RecoveryManager.py:204 - New status, desired status is set to STARTED for METRICS_COLLECTOR INFO 2018-10-29 03:12:15,245 RecoveryManager.py:204 - New status, desired status is set to STARTED for METRICS_GRAFANA INFO 2018-10-29 03:12:15,246 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for ATLAS_CLIENT INFO 2018-10-29 03:12:15,247 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for ATLAS_SERVER INFO 2018-10-29 03:12:15,248 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for FALCON_CLIENT INFO 2018-10-29 03:12:15,248 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for FALCON_SERVER INFO 2018-10-29 03:12:15,249 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for FLUME_HANDLER INFO 2018-10-29 03:12:15,249 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for HBASE_CLIENT INFO 2018-10-29 03:12:15,249 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for HBASE_MASTER INFO 2018-10-29 03:12:15,250 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for HBASE_REGIONSERVER INFO 2018-10-29 03:12:15,250 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for NFS_GATEWAY INFO 2018-10-29 03:12:15,250 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for DATANODE INFO 2018-10-29 03:12:15,251 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for SECONDARY_NAMENODE INFO 2018-10-29 03:12:15,251 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for HDFS_CLIENT INFO 2018-10-29 03:12:15,252 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for NAMENODE INFO 2018-10-29 03:12:15,252 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for HIVE_METASTORE INFO 2018-10-29 03:12:15,252 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for HIVE_SERVER INFO 2018-10-29 03:12:15,253 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for HIVE_CLIENT INFO 2018-10-29 03:12:15,253 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for WEBHCAT_SERVER INFO 2018-10-29 03:12:15,253 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for KAFKA_BROKER INFO 2018-10-29 03:12:15,254 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for KNOX_GATEWAY INFO 2018-10-29 03:12:15,254 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for HISTORYSERVER INFO 2018-10-29 03:12:15,255 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for MAPREDUCE2_CLIENT INFO 2018-10-29 03:12:15,256 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for OOZIE_SERVER INFO 2018-10-29 03:12:15,256 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for OOZIE_CLIENT INFO 2018-10-29 03:12:15,257 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for PIG INFO 2018-10-29 03:12:15,257 RecoveryManager.py:204 - New status, desired status is set to INSTALLED 
for RANGER_ADMIN INFO 2018-10-29 03:12:15,268 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for RANGER_TAGSYNC INFO 2018-10-29 03:12:15,268 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for RANGER_USERSYNC INFO 2018-10-29 03:12:15,270 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for SLIDER INFO 2018-10-29 03:12:15,270 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for SPARK_CLIENT INFO 2018-10-29 03:12:15,271 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for SPARK_JOBHISTORYSERVER INFO 2018-10-29 03:12:15,272 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for LIVY_SERVER INFO 2018-10-29 03:12:15,272 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for SPARK_THRIFTSERVER INFO 2018-10-29 03:12:15,272 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for LIVY2_SERVER INFO 2018-10-29 03:12:15,273 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for SPARK2_CLIENT INFO 2018-10-29 03:12:15,274 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for SPARK2_THRIFTSERVER INFO 2018-10-29 03:12:15,274 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for SPARK2_JOBHISTORYSERVER INFO 2018-10-29 03:12:15,275 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for SQOOP INFO 2018-10-29 03:12:15,275 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for SUPERVISOR INFO 2018-10-29 03:12:15,275 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for NIMBUS INFO 2018-10-29 03:12:15,276 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for DRPC_SERVER INFO 2018-10-29 03:12:15,276 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for STORM_UI_SERVER INFO 2018-10-29 03:12:15,277 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for TEZ_CLIENT INFO 2018-10-29 03:12:15,277 RecoveryManager.py:204 - New status, desired status is set to STARTED for NODEMANAGER INFO 2018-10-29 03:12:15,277 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for YARN_CLIENT INFO 2018-10-29 03:12:15,278 RecoveryManager.py:204 - New status, desired status is set to STARTED for APP_TIMELINE_SERVER INFO 2018-10-29 03:12:15,279 RecoveryManager.py:204 - New status, desired status is set to STARTED for RESOURCEMANAGER INFO 2018-10-29 03:12:15,279 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for ZEPPELIN_MASTER INFO 2018-10-29 03:12:15,279 RecoveryManager.py:204 - New status, desired status is set to STARTED for ZOOKEEPER_SERVER INFO 2018-10-29 03:12:15,280 RecoveryManager.py:204 - New status, desired status is set to INSTALLED for ZOOKEEPER_CLIENT INFO 2018-10-29 03:12:15,280 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component INFRA_SOLR of service AMBARI_INFRA of cluster Sandbox to the queue. INFO 2018-10-29 03:12:16,158 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component INFRA_SOLR_CLIENT of service AMBARI_INFRA of cluster Sandbox to the queue. INFO 2018-10-29 03:12:17,574 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component METRICS_MONITOR of service AMBARI_METRICS of cluster Sandbox to the queue. 
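The RecoverConfig reported during registration (maxCount 6 per 60-minute window, retryGap 5 minutes, lifetime max 1024) together with the desired-status entries above is what drives AUTO_START recovery. The sketch below shows one plausible way that decision could be expressed; it is a simplified illustration under those assumptions, not the RecoveryManager implementation.

```python
import time

# Limits copied from the RecoverConfig entry above (strings in the log,
# integers here for convenience).
RECOVER_CONFIG = {"maxCount": 6, "windowInMinutes": 60,
                  "retryGap": 5, "maxLifetimeCount": 1024}

def may_attempt_recovery(desired, current, window_attempts, last_attempt,
                         lifetime_attempts, cfg=RECOVER_CONFIG, now=None):
    """window_attempts: epoch seconds of recent attempts; last_attempt:
    epoch seconds of the most recent attempt, or None."""
    now = now if now is not None else time.time()
    if desired == current:
        return False                                  # already in desired state
    window = cfg["windowInMinutes"] * 60
    recent = [t for t in window_attempts if now - t < window]
    if len(recent) >= cfg["maxCount"]:
        return False                                  # 6-per-hour budget used up
    if lifetime_attempts >= cfg["maxLifetimeCount"]:
        return False                                  # lifetime cap of 1024
    if last_attempt is not None and now - last_attempt < cfg["retryGap"] * 60:
        return False                                  # keep the 5-minute gap
    return True

# NODEMANAGER above is desired STARTED; if it were found INSTALLED it would
# be an auto-recovery candidate.
print(may_attempt_recovery("STARTED", "INSTALLED", [], None, 0))  # True
```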
INFO 2018-10-29 03:12:19,078 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component METRICS_COLLECTOR of service AMBARI_METRICS of cluster Sandbox to the queue. INFO 2018-10-29 03:12:19,990 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component METRICS_GRAFANA of service AMBARI_METRICS of cluster Sandbox to the queue. INFO 2018-10-29 03:12:20,926 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component ATLAS_CLIENT of service ATLAS of cluster Sandbox to the queue. INFO 2018-10-29 03:12:22,217 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component ATLAS_SERVER of service ATLAS of cluster Sandbox to the queue. INFO 2018-10-29 03:12:26,412 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component FLUME_HANDLER of service FLUME of cluster Sandbox to the queue. INFO 2018-10-29 03:12:27,540 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component HBASE_CLIENT of service HBASE of cluster Sandbox to the queue. INFO 2018-10-29 03:12:28,926 RecoveryManager.py:185 - current status is set to STARTED for FLUME_HANDLER INFO 2018-10-29 03:12:31,657 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component NFS_GATEWAY of service HDFS of cluster Sandbox to the queue. INFO 2018-10-29 03:12:33,122 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component DATANODE of service HDFS of cluster Sandbox to the queue. INFO 2018-10-29 03:12:36,029 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component HDFS_CLIENT of service HDFS of cluster Sandbox to the queue. INFO 2018-10-29 03:12:37,476 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component NAMENODE of service HDFS of cluster Sandbox to the queue. INFO 2018-10-29 03:12:38,375 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component HIVE_METASTORE of service HIVE of cluster Sandbox to the queue. INFO 2018-10-29 03:12:39,224 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component HIVE_SERVER of service HIVE of cluster Sandbox to the queue. INFO 2018-10-29 03:12:42,510 RecoveryManager.py:185 - current status is set to STARTED for HIVE_METASTORE INFO 2018-10-29 03:12:44,010 RecoveryManager.py:185 - current status is set to STARTED for HIVE_SERVER INFO 2018-10-29 03:12:45,173 RecoveryManager.py:185 - current status is set to INSTALLED for HIVE_CLIENT INFO 2018-10-29 03:12:46,415 RecoveryManager.py:185 - current status is set to STARTED for WEBHCAT_SERVER INFO 2018-10-29 03:12:49,699 RecoveryManager.py:185 - current status is set to STARTED for HISTORYSERVER INFO 2018-10-29 03:12:50,021 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component OOZIE_CLIENT of service OOZIE of cluster Sandbox to the queue. INFO 2018-10-29 03:12:50,849 RecoveryManager.py:185 - current status is set to INSTALLED for MAPREDUCE2_CLIENT INFO 2018-10-29 03:12:52,228 RecoveryManager.py:185 - current status is set to STARTED for OOZIE_SERVER INFO 2018-10-29 03:12:53,011 RecoveryManager.py:185 - current status is set to INSTALLED for OOZIE_CLIENT INFO 2018-10-29 03:12:54,347 RecoveryManager.py:185 - current status is set to INSTALLED for PIG INFO 2018-10-29 03:12:55,358 RecoveryManager.py:185 - current status is set to STARTED for RANGER_ADMIN INFO 2018-10-29 03:12:55,396 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component RANGER_USERSYNC of service RANGER of cluster Sandbox to the queue. 
INFO 2018-10-29 03:12:56,569 RecoveryManager.py:185 - current status is set to INSTALLED for RANGER_TAGSYNC INFO 2018-10-29 03:12:56,719 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SLIDER of service SLIDER of cluster Sandbox to the queue. INFO 2018-10-29 03:12:57,833 RecoveryManager.py:185 - current status is set to STARTED for RANGER_USERSYNC INFO 2018-10-29 03:12:58,142 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SPARK_CLIENT of service SPARK of cluster Sandbox to the queue. INFO 2018-10-29 03:12:59,193 RecoveryManager.py:185 - current status is set to INSTALLED for SLIDER INFO 2018-10-29 03:12:59,514 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SPARK_JOBHISTORYSERVER of service SPARK of cluster Sandbox to the queue. INFO 2018-10-29 03:13:00,885 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component LIVY_SERVER of service SPARK of cluster Sandbox to the queue. INFO 2018-10-29 03:13:03,495 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component LIVY2_SERVER of service SPARK2 of cluster Sandbox to the queue. INFO 2018-10-29 03:13:04,692 RecoveryManager.py:185 - current status is set to STARTED for LIVY2_SERVER INFO 2018-10-29 03:13:04,783 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SPARK2_CLIENT of service SPARK2 of cluster Sandbox to the queue. INFO 2018-10-29 03:13:05,935 RecoveryManager.py:185 - current status is set to INSTALLED for SPARK2_CLIENT INFO 2018-10-29 03:13:06,060 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SPARK2_THRIFTSERVER of service SPARK2 of cluster Sandbox to the queue. INFO 2018-10-29 03:13:07,000 RecoveryManager.py:185 - current status is set to STARTED for SPARK2_THRIFTSERVER INFO 2018-10-29 03:13:07,448 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SPARK2_JOBHISTORYSERVER of service SPARK2 of cluster Sandbox to the queue. INFO 2018-10-29 03:13:08,107 RecoveryManager.py:185 - current status is set to STARTED for SPARK2_JOBHISTORYSERVER INFO 2018-10-29 03:13:08,687 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SQOOP of service SQOOP of cluster Sandbox to the queue. INFO 2018-10-29 03:13:09,610 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SUPERVISOR of service STORM of cluster Sandbox to the queue. INFO 2018-10-29 03:13:10,514 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component NIMBUS of service STORM of cluster Sandbox to the queue. INFO 2018-10-29 03:13:11,137 RecoveryManager.py:185 - current status is set to INSTALLED for SQOOP INFO 2018-10-29 03:13:15,976 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component NODEMANAGER of service YARN of cluster Sandbox to the queue. INFO 2018-10-29 03:13:17,282 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component YARN_CLIENT of service YARN of cluster Sandbox to the queue. INFO 2018-10-29 03:13:18,302 RecoveryManager.py:185 - current status is set to INSTALLED for TEZ_CLIENT INFO 2018-10-29 03:13:18,531 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component APP_TIMELINE_SERVER of service YARN of cluster Sandbox to the queue. 
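The recovery log interleaves "desired status" lines (RecoveryManager.py:204) with "current status" lines (RecoveryManager.py:185); components where the two disagree are the candidates for recovery. A throwaway parsing sketch for pulling those mismatches out of an agent log, written against the exact message wording shown here (not an official tool):

```python
import re

DESIRED = re.compile(r"desired status is set to (\w+) for (\w+)")
CURRENT = re.compile(r"current status is set to (\w+) for (\w+)")

def status_mismatches(lines):
    desired, current = {}, {}
    for line in lines:
        # findall copes with several log messages packed onto one physical line.
        for status, component in DESIRED.findall(line):
            desired[component] = status
        for status, component in CURRENT.findall(line):
            current[component] = status
    # Components whose reported state differs from what the server wants.
    return {c: (desired[c], current[c])
            for c in desired if c in current and desired[c] != current[c]}

sample = [
    "RecoveryManager.py:204 - New status, desired status is set to INSTALLED for FLUME_HANDLER",
    "RecoveryManager.py:185 - current status is set to STARTED for FLUME_HANDLER",
]
print(status_mismatches(sample))   # {'FLUME_HANDLER': ('INSTALLED', 'STARTED')}
```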
INFO 2018-10-29 03:13:19,808 RecoveryManager.py:185 - current status is set to STARTED for NODEMANAGER INFO 2018-10-29 03:13:21,249 RecoveryManager.py:185 - current status is set to INSTALLED for YARN_CLIENT INFO 2018-10-29 03:13:22,376 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component ZOOKEEPER_SERVER of service ZOOKEEPER of cluster Sandbox to the queue. INFO 2018-10-29 03:13:22,713 RecoveryManager.py:185 - current status is set to STARTED for APP_TIMELINE_SERVER INFO 2018-10-29 03:13:23,937 RecoveryManager.py:185 - current status is set to STARTED for RESOURCEMANAGER INFO 2018-10-29 03:13:25,105 RecoveryManager.py:185 - current status is set to STARTED for ZEPPELIN_MASTER INFO 2018-10-29 03:13:25,310 AlertSchedulerHandler.py:290 - [AlertScheduler] Caching cluster Sandbox with alert hash e13d86c3bc7e07797530241453dc0c0d INFO 2018-10-29 03:13:25,447 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert mapreduce_history_server_process with UUID c04da460-8755-4138-b797-47ab11ebdfaf is disabled and will not be scheduled INFO 2018-10-29 03:13:25,448 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert mapreduce_history_server_webui with UUID a03860f2-af1b-409e-9173-9156c5081eb3 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,448 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert mapreduce_history_server_rpc_latency with UUID 5e192c4b-c0a4-461b-a255-5a9bfbfef1cd is disabled and will not be scheduled INFO 2018-10-29 03:13:25,449 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert mapreduce_history_server_cpu with UUID 0b2eece1-d694-4993-b218-655ff0a07929 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,449 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert SPARK_JOBHISTORYSERVER_PROCESS with UUID 64990312-4d14-4d67-bf79-7f6f93df05da is disabled and will not be scheduled INFO 2018-10-29 03:13:25,449 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert infra_solr with UUID 1fe14c02-bfcb-4e50-8d69-742c9897716a is disabled and will not be scheduled INFO 2018-10-29 03:13:25,450 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ranger_admin_process with UUID 240dca42-cdeb-4660-b394-f15e071afa84 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,450 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ranger_admin_password_check with UUID 603d8ad5-75c1-4a27-83cd-0ae0bcbb1036 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,450 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ranger_usersync_process with UUID ad898e82-9a6e-4f17-8279-010983b64215 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,451 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert SPARK2_JOBHISTORYSERVER_PROCESS with UUID 20c51b52-15aa-4970-9d14-ee3a1622fd2f is disabled and will not be scheduled INFO 2018-10-29 03:13:25,451 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert metadata_server_webui with UUID bf2aca96-7621-47a9-82ba-c5c7e591736f is disabled and will not be scheduled INFO 2018-10-29 03:13:25,451 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert flume_agent_status with UUID d6a8fac2-c51f-4c7e-9b4c-7ad47a5d03d7 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,452 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert zookeeper_server_process with UUID e8aeceea-22ab-4880-acf0-cd084f32d0da is disabled and will not be scheduled INFO 2018-10-29 03:13:25,452 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ams_metrics_monitor_process with UUID 
d4fa3373-ea4d-47db-9d86-4620568f1394 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,453 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ams_metrics_collector_process with UUID a796e0c1-8b9a-4e11-8984-a12e8a9b78fb is disabled and will not be scheduled INFO 2018-10-29 03:13:25,453 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ams_metrics_collector_hbase_master_process with UUID 7ef7b733-1401-4afe-a360-901304368766 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,453 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ams_metrics_collector_hbase_master_cpu with UUID d4472543-c949-4a35-a2d2-f6eea346b922 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,453 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ams_metrics_collector_autostart with UUID 73ffd827-9119-452f-9d9d-d92b1785d195 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,454 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert grafana_webui with UUID 19c482da-a3c8-4914-bf23-75b9e4ea57dc is disabled and will not be scheduled INFO 2018-10-29 03:13:25,454 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert storm_webui with UUID 9f608c19-bc48-4c7e-b00b-21cdac856f6d is disabled and will not be scheduled INFO 2018-10-29 03:13:25,454 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert storm_supervisor_process with UUID 94856057-a19b-4d97-a548-51a136a44223 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,455 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert storm_nimbus_process with UUID 08207d61-5aec-48bd-869a-8bc4d4fcd3a5 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,455 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert storm_drpc_server with UUID 83a26896-e425-4a67-9b1b-fc8b9bdd032a is disabled and will not be scheduled INFO 2018-10-29 03:13:25,455 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert oozie_server_webui with UUID 79e1191f-4cc5-49a4-858b-0a3404208b94 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,456 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert oozie_server_status with UUID b6836f07-f5f3-4e19-90a2-112b84cc05dd is disabled and will not be scheduled INFO 2018-10-29 03:13:25,456 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_cpu with UUID fa67564f-f463-402f-a1d2-2fa395b03f99 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,457 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert secondary_namenode_process with UUID 6ed52c5d-d900-4327-9bc2-138ee00a5735 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,457 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_hdfs_pending_deletion_blocks with UUID 6bde7b44-4f70-471f-8d2e-7c83dd80de2b is disabled and will not be scheduled INFO 2018-10-29 03:13:25,457 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_client_rpc_queue_latency_daily with UUID 11cbc1bd-05ae-4198-ad37-1488f978528a is disabled and will not be scheduled INFO 2018-10-29 03:13:25,458 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_ha_health with UUID 3abf9692-1adc-4a8c-b555-2b07af91cb51 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,458 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert datanode_heap_usage with UUID 1e8eea58-1d72-482c-a238-ef6db55c7b44 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,459 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert datanode_health_summary with UUID 
ad46e3d1-bb0a-4640-8c92-947ffb4343ed is disabled and will not be scheduled INFO 2018-10-29 03:13:25,459 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert datanode_unmounted_data_dir with UUID 5eff1c2f-5c21-409d-a748-95ebc12b65ed is disabled and will not be scheduled INFO 2018-10-29 03:13:25,459 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_service_rpc_queue_latency_daily with UUID 52157e50-ced5-4515-8d3d-4fc6527031fc is disabled and will not be scheduled INFO 2018-10-29 03:13:25,459 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert datanode_process with UUID a0e66a6d-dcec-4f62-9085-abd672b487b1 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,460 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_client_rpc_processing_latency_daily with UUID b2c4fadb-b5bd-42fa-8903-0d2572d445d4 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,460 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_hdfs_blocks_health with UUID d9b77fc7-0f2f-4471-a0f9-85f429a32296 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,461 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_webui with UUID e4895dfa-f6ce-48bd-bc9e-884e63a8c996 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,461 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert datanode_webui with UUID 3760c848-0090-4aa8-ba71-f4154ec8d29a is disabled and will not be scheduled INFO 2018-10-29 03:13:25,461 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert datanode_storage with UUID 9e1a3236-6123-47ec-b9c0-fe4f9492d676 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,462 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_service_rpc_processing_latency_hourly with UUID 1de98065-b774-4678-9f58-4b4a7b1052d7 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,462 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert nfsgateway_process with UUID 4840a7ba-2563-4d51-8779-c6c773e715d4 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,462 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_increase_in_storage_capacity_usage_daily with UUID ed7e8afb-5537-48b4-82c0-3ca687f0af35 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,463 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_client_rpc_queue_latency_hourly with UUID 1934f3b1-60d2-42cb-a26f-986625718162 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,464 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_service_rpc_processing_latency_daily with UUID 10849f29-8045-4ad2-96fd-03696df045e0 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,464 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert upgrade_finalized_state with UUID e7ccee0c-d9da-4a81-bf85-5465aaf01a8a is disabled and will not be scheduled INFO 2018-10-29 03:13:25,464 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_client_rpc_processing_latency_hourly with UUID 323cf92d-7bab-4a26-a22b-2ea53015aa0a is disabled and will not be scheduled INFO 2018-10-29 03:13:25,466 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_increase_in_storage_capacity_usage_weekly with UUID 6e553a1c-f113-4c78-8e20-609b6c83b7f5 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,467 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_directory_status with UUID 6c67bac7-5974-443e-93c8-d46c9c01ea0d is disabled and will not be scheduled 
INFO 2018-10-29 03:13:25,467 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_service_rpc_queue_latency_hourly with UUID 008b1529-a959-45de-b38b-6bb9ec6a1dcb is disabled and will not be scheduled INFO 2018-10-29 03:13:25,468 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert increase_nn_heap_usage_weekly with UUID a3afa569-6f22-4e52-82cf-cea506414e8d is disabled and will not be scheduled INFO 2018-10-29 03:13:25,468 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_hdfs_capacity_utilization with UUID 732e7066-2433-49b2-b15d-a1573a304adc is disabled and will not be scheduled INFO 2018-10-29 03:13:25,469 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_rpc_latency with UUID 94957f0d-9cf0-433b-9b11-19674aa59bd7 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,469 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert namenode_last_checkpoint with UUID e9b8a11a-e2f2-4e17-9d0f-28dee96d4b33 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,470 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert increase_nn_heap_usage_daily with UUID 8249d880-7176-4d61-a26e-f75ee507511c is disabled and will not be scheduled INFO 2018-10-29 03:13:25,470 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert falcon_server_webui with UUID 61b937eb-6483-48f3-b8c0-71efcba52e85 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,471 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert falcon_server_process with UUID 05817f4d-0e8e-4d2c-984c-9f9a13257eb6 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,472 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert yarn_nodemanager_health with UUID 27535dc0-51c4-4716-83d0-4ccb5fe86c70 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,472 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert yarn_resourcemanager_webui with UUID 511da394-29e1-4aa5-a5b6-be5b93354f10 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,472 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert yarn_resourcemanager_cpu with UUID bba36132-669d-4198-ba2a-ff67e31d695e is disabled and will not be scheduled INFO 2018-10-29 03:13:25,473 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert yarn_nodemanager_webui with UUID d1705ebc-df53-4dcb-8961-ce95b6de1ed5 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,473 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert yarn_resourcemanager_rpc_latency with UUID 22cb0500-1321-478b-84fa-5300723aac5c is disabled and will not be scheduled INFO 2018-10-29 03:13:25,474 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert nodemanager_health_summary with UUID 2f59de42-83f8-41ce-a9cc-b989ed54f7a6 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,475 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert yarn_app_timeline_server_webui with UUID 63aa90d0-c786-4bb2-a14b-f83ccb66bb96 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,475 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert zeppelin_server_status with UUID 366c4af1-d19a-48c2-a456-685fdf720e10 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,475 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert hbase_master_process with UUID 8cb278a0-7f9f-4bb9-a4f5-c881ee3a3efd is disabled and will not be scheduled INFO 2018-10-29 03:13:25,476 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert hbase_master_cpu with UUID 40a8dfee-fed8-4ade-914a-adbeefa76eba is disabled and will 
not be scheduled INFO 2018-10-29 03:13:25,479 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert hbase_regionserver_process with UUID a4ff32b7-0a90-4e69-95c0-a5e2ce07e751 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,479 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert knox_gateway_process with UUID 6b93cb6c-b38a-440e-8b98-906fad0365ad is disabled and will not be scheduled INFO 2018-10-29 03:13:25,480 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert hive_metastore_process with UUID efd35947-2ed7-4407-8152-ffc2e84ba94a is disabled and will not be scheduled INFO 2018-10-29 03:13:25,481 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert hive_server_process with UUID e76e8d09-a6d7-45aa-97ed-ea516947d9f2 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,481 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert hive_webhcat_server_status with UUID 801c56b9-d65c-43da-a214-0a7835bdea65 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,482 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert kafka_broker_process with UUID 1704301f-7c12-4d93-a4fa-b6810dd2b324 is disabled and will not be scheduled INFO 2018-10-29 03:13:25,482 AlertSchedulerHandler.py:358 - [AlertScheduler] The alert ambari_agent_disk_usage with UUID 9c128bc6-d731-42cb-b139-624db29c4e7b is disabled and will not be scheduled INFO 2018-10-29 03:13:25,483 AlertSchedulerHandler.py:230 - [AlertScheduler] Reschedule Summary: 74 rescheduled, 0 unscheduled INFO 2018-10-29 03:13:25,485 Controller.py:512 - Registration response from sandbox.hortonworks.com was OK INFO 2018-10-29 03:13:25,493 Controller.py:517 - Resetting ActionQueue... INFO 2018-10-29 03:13:35,498 Controller.py:304 - Heartbeat (response id = 0) with server is running... INFO 2018-10-29 03:13:35,499 Controller.py:311 - Building heartbeat message INFO 2018-10-29 03:13:46,906 Heartbeat.py:90 - Adding host info/state to heartbeat message. 
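The run of AlertScheduler entries above shows dozens of alert definitions being cached for cluster Sandbox but skipped because they are disabled, followed by the "74 rescheduled, 0 unscheduled" summary. Below is a minimal sketch (not part of the log) of how one might tally those disabled-alert records from this agent log with the Python standard library; the log path is an assumption and should point at wherever this output was captured.

    import re
    from collections import Counter

    LOG_PATH = "/var/log/ambari-agent/ambari-agent.log"  # assumed location of this log
    DISABLED = re.compile(
        r"The alert (\S+) with UUID (\S+) is disabled and will not be scheduled")

    counts = Counter()
    with open(LOG_PATH) as fh:
        for line in fh:
            # findall returns (alert_name, uuid) tuples for every match on the line
            for alert_name, _uuid in DISABLED.findall(line):
                counts[alert_name] += 1

    print("%d distinct disabled alert definitions" % len(counts))
    for alert_name, seen in counts.most_common():
        print("  %-55s x%d" % (alert_name, seen))

Comparing that tally against the scheduler's own reschedule summary gives a quick check of which alert definitions the agent is deliberately skipping versus actually scheduling.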
INFO 2018-10-29 03:13:47,555 Hardware.py:174 - Some mount points were ignored: /, /dev, /sys/fs/cgroup, /hadoop, /etc/resolv.conf, /etc/hostname, /etc/hosts, /dev/shm INFO 2018-10-29 03:13:47,618 Controller.py:320 - Sending Heartbeat (id = 0) INFO 2018-10-29 03:13:47,774 Controller.py:332 - Heartbeat response received (id = 0) INFO 2018-10-29 03:13:47,775 Controller.py:341 - Heartbeat interval is 10 seconds INFO 2018-10-29 03:13:47,775 Controller.py:353 - RegistrationCommand received - repeat agent registration INFO 2018-10-29 03:13:48,469 Controller.py:170 - Registering with sandbox.hortonworks.com (172.17.0.2) (agent='{"hardwareProfile": {"kernel": "Linux", "domain": "hortonworks.com", "physicalprocessorcount": 4, "kernelrelease": "4.11.0-1.el7.elrepo.x86_64", "uptime_days": "0", "memorytotal": 10419156, "swapfree": "4.87 GB", "memorysize": 10419156, "osfamily": "redhat", "swapsize": "4.88 GB", "processorcount": 4, "netmask": "255.255.0.0", "timezone": "UTC", "hardwareisa": "x86_64", "memoryfree": 1546824, "operatingsystem": "centos", "kernelmajversion": "4.11", "kernelversion": "4.11.0", "macaddress": "02:42:AC:11:00:02", "operatingsystemrelease": "6.9", "ipaddress": "172.17.0.2", "hostname": "sandbox", "uptime_hours": "14", "fqdn": "sandbox.hortonworks.com", "id": "maria_dev", "architecture": "x86_64", "selinux": false, "mounts": [{"available": "25746000", "used": "16667648", "percent": "40%", "device": "overlay", "mountpoint": "/", "type": "overlay", "size": "44707764"}, {"available": "25746000", "used": "16667648", "percent": "40%", "device": "/dev/sda3", "mountpoint": "/hadoop", "type": "ext4", "size": "44707764"}], "hardwaremodel": "x86_64", "uptime_seconds": "52785", "interfaces": "eth0,lo"}, "currentPingPort": 8670, "prefix": "/var/lib/ambari-agent/data", "agentVersion": "2.5.0.5", "agentEnv": {"transparentHugePage": "", "hostHealth": {"agentTimeStampAtReporting": 1540782828361, "activeJavaProcs": [{"command": "/usr/lib/jvm/java/bin/java -Dproc_datanode -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-datanode-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m 
-Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode", "pid": 516, "hadoop": true, "user": "hdfs"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_namenode -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-namenode-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT 
-XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode", "pid": 519, "hadoop": true, "user": "hdfs"}, {"command": "java -Dproc_rangeradmin -XX:MaxPermSize=256m -Xmx1024m -Xms1024m -Duser.timezone=UTC -Dservername=rangeradmin -Dlogdir=/var/log/ranger/admin -Dcatalina.base=/usr/hdp/2.6.0.3-8/ranger-admin/ews -cp /usr/hdp/2.6.0.3-8/ranger-admin/ews/webapp/WEB-INF/classes/conf:/usr/hdp/2.6.0.3-8/ranger-admin/ews/lib/*:/usr/hdp/2.6.0.3-8/ranger-admin/ews/ranger_jaas/*:/usr/hdp/2.6.0.3-8/ranger-admin/ews/webapp/WEB-INF/classes/conf/ranger_jaas:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.85.x86_64/lib/*:/*: org.apache.ranger.server.tomcat.EmbeddedServer", "pid": 863, "hadoop": false, "user": "ranger"}, {"command": "java -Dproc_rangerusersync -Dlog4j.configuration=file:/etc/ranger/usersync/conf/log4j.properties -Dlogdir=/var/log/ranger/usersync -cp /usr/hdp/2.6.0.3-8/ranger-usersync/dist/*:/usr/hdp/2.6.0.3-8/ranger-usersync/lib/*:/usr/hdp/2.6.0.3-8/ranger-usersync/conf:/* org.apache.ranger.authentication.UnixAuthenticationService -enableUnixAuth", "pid": 984, "hadoop": false, "user": "ranger"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_portmap -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop--portmap-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.portmap.Portmap", "pid": 1185, "hadoop": true, "user": "root"}, {"command": "jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp /etc/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true 
-Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs -Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter", "pid": 1208, "hadoop": true, "user": "root"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Xmx1195m -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/usr/hdp/2.6.0.3-8/hive/bin/../conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive/lib/hive-service-1.2.1000.2.6.0.3-8.jar org.apache.hadoop.hive.metastore.HiveMetaStore -hiveconf hive.log.file=hivemetastore.log -hiveconf hive.log.dir=/var/log/hive", "pid": 1457, "hadoop": true, "user": "hive"}, {"command": "/usr/lib/jvm/java/bin/java -Djava.util.logging.config.file=/usr/hdp/current/oozie-server/oozie-server/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Dhdp.version=2.6.0.3-8 -Xmx1024m -Xmx1024m -Dderby.stream.error.file=/var/log/oozie/derby.log -Doozie.home.dir=/usr/hdp/2.6.0.3-8/oozie -Doozie.config.dir=/usr/hdp/current/oozie-server/conf -Doozie.log.dir=/var/log/oozie -Doozie.data.dir=/hadoop/oozie/data -Doozie.instance.id=sandbox.hortonworks.com -Doozie.config.file=oozie-site.xml -Doozie.log4j.file=oozie-log4j.properties -Doozie.log4j.reload=10 -Doozie.http.hostname=sandbox.hortonworks.com -Doozie.admin.port=11001 -Doozie.http.port=11000 -Doozie.https.port=11443 -Doozie.base.url=http://sandbox.hortonworks.com:11000/oozie -Doozie.https.keystore.file=/home/oozie/.keystore -Doozie.https.keystore.pass=password -Djava.library.path=/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64 -Djava.endorsed.dirs=/usr/lib/bigtop-tomcat/endorsed -classpath /usr/lib/bigtop-tomcat/bin/bootstrap.jar -Dcatalina.base=/usr/hdp/current/oozie-server/oozie-server -Dcatalina.home=/usr/lib/bigtop-tomcat -Djava.io.tmpdir=/var/tmp/oozie org.apache.catalina.startup.Bootstrap start", "pid": 1555, "hadoop": true, "user": "oozie"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log 
-Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Xmx1195m -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/usr/hdp/2.6.0.3-8/hive/bin/../conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive/lib/hive-service-1.2.1000.2.6.0.3-8.jar org.apache.hive.service.server.HiveServer2 --hiveconf hive.aux.jars.path=file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar -hiveconf hive.metastore.uris= -hiveconf hive.log.file=hiveserver2.log -hiveconf hive.log.dir=/var/log/hive", "pid": 1634, "hadoop": true, "user": "hive"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dwebhcat.log.dir=/var/log/webhcat/ -Dlog4j.configuration=file:///usr/hdp/2.6.0.3-8/hive-hcatalog/sbin/../etc/webhcat/webhcat-log4j.properties -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hcat -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hcat -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive-hcatalog/sbin/../share/webhcat/svr/lib/hive-webhcat-1.2.1000.2.6.0.3-8.jar org.apache.hive.hcatalog.templeton.Main", "pid": 2020, "hadoop": true, "user": "hcat"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=2.6.0.3-8 -cp /usr/hdp/current/spark2-historyserver/conf/:/usr/hdp/current/spark2-historyserver/jars/*:/usr/hdp/current/hadoop-client/conf/ -Xmx1024m org.apache.spark.deploy.history.HistoryServer", "pid": 2192, "hadoop": true, "user": "spark"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx2g -cp /usr/hdp/current/livy2-server/jars/*:/usr/hdp/current/livy2-server/conf:/etc/spark2/conf:/etc/hadoop/conf: com.cloudera.livy.server.LivyServer", "pid": 2465, "hadoop": true, "user": "livy"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=2.6.0.3-8 -cp /usr/hdp/current/spark2-thriftserver/conf/:/usr/hdp/current/spark2-thriftserver/jars/*:/usr/hdp/current/hadoop-client/conf/ -Xmx1024m org.apache.spark.deploy.SparkSubmit --properties-file /usr/hdp/current/spark2-thriftserver/conf/spark-thrift-sparkconf.conf --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name Thrift JDBC/ODBC Server spark-internal", "pid": 2481, "hadoop": true, "user": "spark"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_historyserver -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dhdp.version= -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=mapred -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-mapreduce/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.root.logger=INFO,console -Dhadoop.id.str=mapred 
-Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=mapred -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-mapreduce/mapred -Dhadoop.log.file=mapred-mapred-historyserver-sandbox.hortonworks.com.log -Dhadoop.root.logger=INFO,RFA -Dmapred.jobsummary.logger=INFO,JSA -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer", "pid": 3408, "hadoop": true, "user": "mapred"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=None -Dspark.executor.memory=512m -Dspark.executor.instances=2 -Dspark.yarn.queue=default -Dfile.encoding=UTF-8 -Xms1024m -Xmx1024m -XX:MaxPermSize=512m -Dlog4j.configuration=file:///usr/hdp/current/zeppelin-server/conf/log4j.properties -Dzeppelin.log.file=/var/log/zeppelin/zeppelin-zeppelin-sandbox.hortonworks.com.log -cp ::/usr/hdp/current/zeppelin-server/lib/interpreter/*:/usr/hdp/current/zeppelin-server/lib/*:/usr/hdp/current/zeppelin-server/*::/usr/hdp/current/zeppelin-server/conf org.apache.zeppelin.server.ZeppelinServer", "pid": 3468, "hadoop": false, "user": "zeppelin"}, {"command": "jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp /etc/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs 
-Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter", "pid": 3583, "hadoop": true, "user": "hdfs"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_resourcemanager -Xmx250m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY -Drm.audit.logger=INFO,RMAUDIT -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath /usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/rm-config/log4j.properties org.apache.hadoop.yarn.server.resourcemanager.ResourceManager", "pid": 6891, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_timelineserver -Xmx250m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA 
-Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Djava.util.logging.config.file=ats.logging.properties -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath /usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/timelineserver-config/log4j.properties org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer", "pid": 7505, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_nodemanager -Xmx512m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -server -Dnm.audit.logger=INFO,NMAUDIT -Dnm.audit.logger=INFO,NMAUDIT -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA 
-Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath /usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager", "pid": 7753, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.log.file=zookeeper-zookeeper-server-sandbox.hortonworks.com.log -Dzookeeper.root.logger=INFO,ROLLINGFILE -cp /usr/hdp/current/zookeeper-server/bin/../build/classes:/usr/hdp/current/zookeeper-server/bin/../build/lib/*.jar:/usr/hdp/current/zookeeper-server/bin/../lib/xercesMinimal-1.9.6.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-provider-api-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-shared4-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-shared-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-lightweight-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-file-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-api-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-utils-3.0.8.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-interpolation-1.11.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-container-default-1.0-alpha-9-stable-1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/netty-3.7.0.Final.jar:/usr/hdp/current/zookeeper-server/bin/../lib/nekohtml-1.9.6.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-settings-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-repository-metadata-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-project-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-profile-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-plugin-registry-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-model-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-error-diagnostics-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-artifact-manager-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-artifact-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-ant-tasks-2.1.3.jar:/usr/hdp/current/zookeeper-server/bin/../lib/log4j-1.2.16.jar:/usr/hdp/current/zookeeper-server/bin/../lib/jsoup-1.7.1.jar:/usr/hdp/current/zooke
eper-server/bin/../lib/jline-0.9.94.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-logging-1.1.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-io-2.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-codec-1.6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/classworlds-1.1-alpha-2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/backport-util-concurrent-3.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/ant-launcher-1.8.0.jar:/usr/hdp/current/zookeeper-server/bin/../lib/ant-1.8.0.jar:/usr/hdp/current/zookeeper-server/bin/../zookeeper-3.4.6.2.6.0.3-8.jar:/usr/hdp/current/zookeeper-server/bin/../src/java/lib/*.jar:/usr/hdp/current/zookeeper-server/conf::/usr/share/zookeeper/*:/usr/share/zookeeper/* -Xmx1024m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/hdp/current/zookeeper-server/conf/zoo.cfg", "pid": 16042, "hadoop": true, "user": "zookeeper"}], "liveServices": [{"status": "Unhealthy", "name": "ntpd or chronyd", "desc": "ntpd is stopped\\n"}]}, "reverseLookup": true, "alternatives": [{"name": "hue-conf", "target": "/etc/hue/conf.empty"}], "umask": "18", "firewallName": "iptables", "stackFoldersAndFiles": [{"type": "directory", "name": "/etc/hadoop"}, {"type": "directory", "name": "/etc/hbase"}, {"type": "directory", "name": "/etc/hive"}, {"type": "directory", "name": "/etc/oozie"}, {"type": "directory", "name": "/etc/sqoop"}, {"type": "directory", "name": "/etc/hue"}, {"type": "directory", "name": "/etc/zookeeper"}, {"type": "directory", "name": "/etc/flume"}, {"type": "directory", "name": "/etc/storm"}, {"type": "directory", "name": "/etc/hive-hcatalog"}, {"type": "directory", "name": "/etc/tez"}, {"type": "directory", "name": "/etc/falcon"}, {"type": "directory", "name": "/etc/knox"}, {"type": "directory", "name": "/etc/hive-webhcat"}, {"type": "directory", "name": "/etc/kafka"}, {"type": "directory", "name": "/etc/slider"}, {"type": "directory", "name": "/etc/storm-slider-client"}, {"type": "directory", "name": "/etc/spark"}, {"type": "directory", "name": "/etc/pig"}, {"type": "directory", "name": "/etc/phoenix"}, {"type": "directory", "name": "/etc/ranger"}, {"type": "directory", "name": "/etc/ambari-metrics-collector"}, {"type": "directory", "name": "/etc/ambari-metrics-monitor"}, {"type": "directory", "name": "/etc/atlas"}, {"type": "directory", "name": "/etc/zeppelin"}, {"type": "directory", "name": "/var/run/hadoop"}, {"type": "directory", "name": "/var/run/hbase"}, {"type": "directory", "name": "/var/run/hive"}, {"type": "directory", "name": "/var/run/oozie"}, {"type": "directory", "name": "/var/run/sqoop"}, {"type": "directory", "name": "/var/run/zookeeper"}, {"type": "directory", "name": "/var/run/flume"}, {"type": "directory", "name": "/var/run/storm"}, {"type": "directory", "name": "/var/run/hive-hcatalog"}, {"type": "directory", "name": "/var/run/falcon"}, {"type": "directory", "name": "/var/run/webhcat"}, {"type": "directory", "name": "/var/run/hadoop-yarn"}, {"type": "directory", "name": "/var/run/hadoop-mapreduce"}, {"type": "directory", "name": "/var/run/knox"}, {"type": "directory", "name": "/var/run/kafka"}, {"type": "directory", "name": "/var/run/spark"}, {"type": "directory", "name": "/var/run/ranger"}, {"type": "directory", "name": "/var/run/ambari-metrics-collector"}, {"type": "directory", "name": "/var/run/ambari-metrics-monitor"}, {"type": "directory", "name": "/var/run/atlas"}, {"type": "directory", "name": "/var/run/zeppelin"}, {"type": 
"directory", "name": "/var/log/hadoop"}, {"type": "directory", "name": "/var/log/hbase"}, {"type": "directory", "name": "/var/log/hive"}, {"type": "directory", "name": "/var/log/oozie"}, {"type": "directory", "name": "/var/log/sqoop"}, {"type": "directory", "name": "/var/log/hue"}, {"type": "directory", "name": "/var/log/zookeeper"}, {"type": "directory", "name": "/var/log/flume"}, {"type": "directory", "name": "/var/log/storm"}, {"type": "directory", "name": "/var/log/hive-hcatalog"}, {"type": "directory", "name": "/var/log/falcon"}, {"type": "directory", "name": "/var/log/webhcat"}, {"type": "directory", "name": "/var/log/hadoop-yarn"}, {"type": "directory", "name": "/var/log/hadoop-mapreduce"}, {"type": "directory", "name": "/var/log/knox"}, {"type": "directory", "name": "/var/log/kafka"}, {"type": "directory", "name": "/var/log/spark"}, {"type": "directory", "name": "/var/log/ranger"}, {"type": "directory", "name": "/var/log/ambari-metrics-collector"}, {"type": "directory", "name": "/var/log/ambari-metrics-monitor"}, {"type": "directory", "name": "/var/log/atlas"}, {"type": "directory", "name": "/var/log/zeppelin"}, {"type": "directory", "name": "/usr/lib/flume"}, {"type": "directory", "name": "/usr/lib/storm"}, {"type": "directory", "name": "/usr/lib/ambari-metrics-collector"}, {"type": "directory", "name": "/var/lib/hive"}, {"type": "directory", "name": "/var/lib/oozie"}, {"type": "directory", "name": "/var/lib/hue"}, {"type": "directory", "name": "/var/lib/flume"}, {"type": "directory", "name": "/var/lib/hadoop-hdfs"}, {"type": "directory", "name": "/var/lib/hadoop-yarn"}, {"type": "directory", "name": "/var/lib/hadoop-mapreduce"}, {"type": "directory", "name": "/var/lib/knox"}, {"type": "directory", "name": "/var/lib/slider"}, {"type": "directory", "name": "/var/lib/spark"}, {"type": "directory", "name": "/var/lib/ranger"}, {"type": "directory", "name": "/var/lib/ambari-metrics-collector"}, {"type": "directory", "name": "/var/lib/zeppelin"}, {"type": "directory", "name": "/var/tmp/oozie"}, {"type": "directory", "name": "/var/tmp/sqoop"}, {"type": "directory", "name": "/tmp/hive"}, {"type": "directory", "name": "/tmp/ambari-qa"}, {"type": "directory", "name": "/tmp/hadoop-hdfs"}, {"type": "directory", "name": "/tmp/spark"}, {"type": "directory", "name": "/tmp/ranger"}, {"type": "directory", "name": "/hadoop/oozie"}, {"type": "directory", "name": "/hadoop/zookeeper"}, {"type": "directory", "name": "/hadoop/hdfs"}, {"type": "directory", "name": "/hadoop/storm"}, {"type": "directory", "name": "/hadoop/falcon"}, {"type": "directory", "name": "/hadoop/yarn"}, {"type": "directory", "name": "/kafka-logs"}], "existingUsers": [{"status": "Available", "name": "hive", "homeDir": "/home/hive"}, {"status": "Available", "name": "storm", "homeDir": "/home/storm"}, {"status": "Available", "name": "zookeeper", "homeDir": "/home/zookeeper"}, {"status": "Available", "name": "oozie", "homeDir": "/home/oozie"}, {"status": "Available", "name": "atlas", "homeDir": "/home/atlas"}, {"status": "Available", "name": "ams", "homeDir": "/home/ams"}, {"status": "Available", "name": "falcon", "homeDir": "/home/falcon"}, {"status": "Available", "name": "ranger", "homeDir": "/home/ranger"}, {"status": "Available", "name": "tez", "homeDir": "/home/tez"}, {"status": "Available", "name": "zeppelin", "homeDir": "/home/zeppelin"}, {"status": "Available", "name": "spark", "homeDir": "/home/spark"}, {"status": "Available", "name": "ambari-qa", "homeDir": "/home/ambari-qa"}, {"status": "Available", "name": "flume", 
"homeDir": "/home/flume"}, {"status": "Available", "name": "kafka", "homeDir": "/home/kafka"}, {"status": "Available", "name": "hdfs", "homeDir": "/home/hdfs"}, {"status": "Available", "name": "sqoop", "homeDir": "/home/sqoop"}, {"status": "Available", "name": "yarn", "homeDir": "/home/yarn"}, {"status": "Available", "name": "mapred", "homeDir": "/home/mapred"}, {"status": "Available", "name": "hbase", "homeDir": "/home/hbase"}, {"status": "Available", "name": "knox", "homeDir": "/home/knox"}, {"status": "Available", "name": "hcat", "homeDir": "/home/hcat"}, {"status": "Available", "name": "hue", "homeDir": "/usr/lib/hue"}, {"status": "Available", "name": "kms", "homeDir": "/var/lib/ranger/kms"}], "firewallRunning": false}, "timestamp": 1540782827786, "hostname": "sandbox.hortonworks.com", "responseId": -1, "publicHostname": "sandbox.hortonworks.com"}') INFO 2018-10-29 03:15:23,398 Controller.py:196 - Registration Successful (response id = 0) INFO 2018-10-29 03:15:23,399 ClusterConfiguration.py:119 - Updating cached configurations for cluster Sandbox INFO 2018-10-29 03:15:23,464 RecoveryManager.py:577 - RecoverConfig = {'components': 'FLUME_HANDLER,HIVE_METASTORE,HIVE_SERVER,HIVE_CLIENT,WEBHCAT_SERVER,HISTORYSERVER,MAPREDUCE2_CLIENT,OOZIE_SERVER,OOZIE_CLIENT,PIG,RANGER_ADMIN,RANGER_TAGSYNC,RANGER_USERSYNC,SLIDER,LIVY2_SERVER,SPARK2_CLIENT,SPARK2_THRIFTSERVER,SPARK2_JOBHISTORYSERVER,SQOOP,TEZ_CLIENT,NODEMANAGER,YARN_CLIENT,APP_TIMELINE_SERVER,RESOURCEMANAGER,ZEPPELIN_MASTER', 'maxCount': '6', 'maxLifetimeCount': '1024', 'recoveryTimestamp': 1540782828863, 'retryGap': '5', 'type': 'AUTO_START', 'windowInMinutes': '60'} INFO 2018-10-29 03:15:23,465 RecoveryManager.py:677 - ==> Auto recovery is enabled with maximum 6 in 60 minutes with gap of 5 minutes between and lifetime max being 1024. Enabled components - FLUME_HANDLER, HIVE_METASTORE, HIVE_SERVER, HIVE_CLIENT, WEBHCAT_SERVER, HISTORYSERVER, MAPREDUCE2_CLIENT, OOZIE_SERVER, OOZIE_CLIENT, PIG, RANGER_ADMIN, RANGER_TAGSYNC, RANGER_USERSYNC, SLIDER, LIVY2_SERVER, SPARK2_CLIENT, SPARK2_THRIFTSERVER, SPARK2_JOBHISTORYSERVER, SQOOP, TEZ_CLIENT, NODEMANAGER, YARN_CLIENT, APP_TIMELINE_SERVER, RESOURCEMANAGER, ZEPPELIN_MASTER INFO 2018-10-29 03:15:23,465 AmbariConfig.py:316 - Updating config property (agent.check.remote.mounts) with value (false) INFO 2018-10-29 03:15:23,466 AmbariConfig.py:316 - Updating config property (agent.auto.cache.update) with value (true) INFO 2018-10-29 03:15:23,466 AmbariConfig.py:316 - Updating config property (agent.check.mounts.timeout) with value (0) INFO 2018-10-29 03:15:23,467 Controller.py:258 - Adding 54 status commands. Heartbeat id = 0 INFO 2018-10-29 03:15:23,539 security.py:141 - Encountered communication error. 
Details: BadStatusLine('',)
ERROR 2018-10-29 03:15:23,539 Controller.py:226 - Unable to connect to: https://sandbox.hortonworks.com:8441/agent/v1/register/sandbox.hortonworks.com
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 213, in registerWithServer
    self.addToStatusQueue(ret['statusCommands'])
  File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 260, in addToStatusQueue
    self.updateComponents(commands[0]['clusterName'])
  File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 558, in updateComponents
    response = self.sendRequest(self.componentsUrl + cluster_name, None)
  File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 545, in sendRequest
    raise IOError('Request to {0} failed due to {1}'.format(url, str(exception)))
IOError: Request to https://sandbox.hortonworks.com:8441/agent/v1/components/Sandbox failed due to Error occured during connecting to the server:
ERROR 2018-10-29 03:15:23,564 Controller.py:227 - Error:Request to https://sandbox.hortonworks.com:8441/agent/v1/components/Sandbox failed due to Error occured during connecting to the server:
WARNING 2018-10-29 03:15:23,565 Controller.py:228 - Sleeping for 11 seconds and then trying again
INFO 2018-10-29 03:15:34,578 Controller.py:512 - Registration response from sandbox.hortonworks.com was OK
INFO 2018-10-29 03:15:34,579 Controller.py:517 - Resetting ActionQueue...
INFO 2018-10-29 03:15:44,593 Controller.py:304 - Heartbeat (response id = 0) with server is running...
INFO 2018-10-29 03:15:44,593 Controller.py:311 - Building heartbeat message
INFO 2018-10-29 03:15:44,600 Heartbeat.py:90 - Adding host info/state to heartbeat message.
INFO 2018-10-29 03:15:45,489 Hardware.py:174 - Some mount points were ignored: /, /dev, /sys/fs/cgroup, /hadoop, /etc/resolv.conf, /etc/hostname, /etc/hosts, /dev/shm
INFO 2018-10-29 03:15:45,495 Controller.py:320 - Sending Heartbeat (id = 0)
INFO 2018-10-29 03:15:45,503 NetUtil.py:67 - Connecting to https://sandbox.hortonworks.com:8440/connection_info
WARNING 2018-10-29 03:15:45,505 NetUtil.py:98 - Failed to connect to https://sandbox.hortonworks.com:8440/connection_info due to [Errno 111] Connection refused
INFO 2018-10-29 03:15:45,505 security.py:93 - SSL Connect being called.. connecting to the server
ERROR 2018-10-29 03:15:45,505 Controller.py:456 - Connection to sandbox.hortonworks.com was lost (details=Request to https://sandbox.hortonworks.com:8441/agent/v1/heartbeat/sandbox.hortonworks.com failed due to [Errno 111] Connection refused)
INFO 2018-10-29 03:15:56,518 Controller.py:471 - Waiting 9.9 for next heartbeat
INFO 2018-10-29 03:16:06,423 Controller.py:478 - Wait for next heartbeat over
INFO 2018-10-29 03:16:06,428 NetUtil.py:67 - Connecting to https://sandbox.hortonworks.com:8440/connection_info
WARNING 2018-10-29 03:16:06,429 NetUtil.py:98 - Failed to connect to https://sandbox.hortonworks.com:8440/connection_info due to [Errno 111] Connection refused
INFO 2018-10-29 03:16:06,429 security.py:93 - SSL Connect being called..
connecting to the server ERROR 2018-10-29 03:16:06,431 Controller.py:456 - Unable to reconnect to https://sandbox.hortonworks.com:8441/agent/v1/heartbeat/sandbox.hortonworks.com (attempts=1, details=Request to https://sandbox.hortonworks.com:8441/agent/v1/heartbeat/sandbox.hortonworks.com failed due to [Errno 111] Connection refused) INFO 2018-10-29 03:16:44,360 NetUtil.py:67 - Connecting to https://sandbox.hortonworks.com:8440/connection_info WARNING 2018-10-29 03:16:44,361 NetUtil.py:98 - Failed to connect to https://sandbox.hortonworks.com:8440/connection_info due to [Errno 111] Connection refused INFO 2018-10-29 03:16:44,361 security.py:93 - SSL Connect being called.. connecting to the server ERROR 2018-10-29 03:16:44,362 Controller.py:456 - Unable to reconnect to https://sandbox.hortonworks.com:8441/agent/v1/heartbeat/sandbox.hortonworks.com (attempts=2, details=Request to https://sandbox.hortonworks.com:8441/agent/v1/heartbeat/sandbox.hortonworks.com failed due to [Errno 111] Connection refused) INFO 2018-10-29 03:17:21,265 Controller.py:304 - Heartbeat (response id = 0) with server is running... INFO 2018-10-29 03:17:21,266 Controller.py:320 - Sending Heartbeat (id = 0) INFO 2018-10-29 03:17:21,269 NetUtil.py:67 - Connecting to https://sandbox.hortonworks.com:8440/connection_info INFO 2018-10-29 03:17:22,329 security.py:93 - SSL Connect being called.. connecting to the server INFO 2018-10-29 03:17:22,727 security.py:60 - SSL connection established. Two-way SSL authentication is turned off on the server. INFO 2018-10-29 03:17:22,908 Controller.py:332 - Heartbeat response received (id = 0) INFO 2018-10-29 03:17:22,909 Controller.py:341 - Heartbeat interval is 10 seconds INFO 2018-10-29 03:17:22,910 Controller.py:353 - RegistrationCommand received - repeat agent registration INFO 2018-10-29 03:17:23,657 Controller.py:170 - Registering with sandbox.hortonworks.com (172.17.0.2) (agent='{"hardwareProfile": {"kernel": "Linux", "domain": "hortonworks.com", "physicalprocessorcount": 4, "kernelrelease": "4.11.0-1.el7.elrepo.x86_64", "uptime_days": "0", "memorytotal": 10419156, "swapfree": "4.87 GB", "memorysize": 10419156, "osfamily": "redhat", "swapsize": "4.88 GB", "processorcount": 4, "netmask": "255.255.0.0", "timezone": "UTC", "hardwareisa": "x86_64", "memoryfree": 1546824, "operatingsystem": "centos", "kernelmajversion": "4.11", "kernelversion": "4.11.0", "macaddress": "02:42:AC:11:00:02", "operatingsystemrelease": "6.9", "ipaddress": "172.17.0.2", "hostname": "sandbox", "uptime_hours": "14", "fqdn": "sandbox.hortonworks.com", "id": "maria_dev", "architecture": "x86_64", "selinux": false, "mounts": [{"available": "25746000", "used": "16667648", "percent": "40%", "device": "overlay", "mountpoint": "/", "type": "overlay", "size": "44707764"}, {"available": "25746000", "used": "16667648", "percent": "40%", "device": "/dev/sda3", "mountpoint": "/hadoop", "type": "ext4", "size": "44707764"}], "hardwaremodel": "x86_64", "uptime_seconds": "52785", "interfaces": "eth0,lo"}, "currentPingPort": 8670, "prefix": "/var/lib/ambari-agent/data", "agentVersion": "2.5.0.5", "agentEnv": {"transparentHugePage": "", "hostHealth": {"agentTimeStampAtReporting": 1540783043543, "activeJavaProcs": [{"command": "/usr/lib/jvm/java/bin/java -Dproc_datanode -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log 
-Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-datanode-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode", "pid": 516, "hadoop": true, "user": "hdfs"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_namenode -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-namenode-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps 
-XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode", "pid": 519, "hadoop": true, "user": "hdfs"}, {"command": "java -Dproc_rangeradmin -XX:MaxPermSize=256m -Xmx1024m -Xms1024m -Duser.timezone=UTC -Dservername=rangeradmin -Dlogdir=/var/log/ranger/admin -Dcatalina.base=/usr/hdp/2.6.0.3-8/ranger-admin/ews -cp /usr/hdp/2.6.0.3-8/ranger-admin/ews/webapp/WEB-INF/classes/conf:/usr/hdp/2.6.0.3-8/ranger-admin/ews/lib/*:/usr/hdp/2.6.0.3-8/ranger-admin/ews/ranger_jaas/*:/usr/hdp/2.6.0.3-8/ranger-admin/ews/webapp/WEB-INF/classes/conf/ranger_jaas:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.85.x86_64/lib/*:/*: org.apache.ranger.server.tomcat.EmbeddedServer", "pid": 863, "hadoop": false, "user": "ranger"}, {"command": "java -Dproc_rangerusersync -Dlog4j.configuration=file:/etc/ranger/usersync/conf/log4j.properties -Dlogdir=/var/log/ranger/usersync -cp /usr/hdp/2.6.0.3-8/ranger-usersync/dist/*:/usr/hdp/2.6.0.3-8/ranger-usersync/lib/*:/usr/hdp/2.6.0.3-8/ranger-usersync/conf:/* org.apache.ranger.authentication.UnixAuthenticationService -enableUnixAuth", "pid": 984, "hadoop": false, "user": "ranger"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_portmap -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop--portmap-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native 
-Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.portmap.Portmap", "pid": 1185, "hadoop": true, "user": "root"}, {"command": "jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp /etc/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs -Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter", "pid": 1208, "hadoop": true, "user": "root"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Xmx1195m -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/usr/hdp/2.6.0.3-8/hive/bin/../conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive/lib/hive-service-1.2.1000.2.6.0.3-8.jar org.apache.hadoop.hive.metastore.HiveMetaStore -hiveconf hive.log.file=hivemetastore.log -hiveconf hive.log.dir=/var/log/hive", "pid": 1457, "hadoop": true, "user": "hive"}, {"command": "/usr/lib/jvm/java/bin/java -Djava.util.logging.config.file=/usr/hdp/current/oozie-server/oozie-server/conf/logging.properties 
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Dhdp.version=2.6.0.3-8 -Xmx1024m -Xmx1024m -Dderby.stream.error.file=/var/log/oozie/derby.log -Doozie.home.dir=/usr/hdp/2.6.0.3-8/oozie -Doozie.config.dir=/usr/hdp/current/oozie-server/conf -Doozie.log.dir=/var/log/oozie -Doozie.data.dir=/hadoop/oozie/data -Doozie.instance.id=sandbox.hortonworks.com -Doozie.config.file=oozie-site.xml -Doozie.log4j.file=oozie-log4j.properties -Doozie.log4j.reload=10 -Doozie.http.hostname=sandbox.hortonworks.com -Doozie.admin.port=11001 -Doozie.http.port=11000 -Doozie.https.port=11443 -Doozie.base.url=http://sandbox.hortonworks.com:11000/oozie -Doozie.https.keystore.file=/home/oozie/.keystore -Doozie.https.keystore.pass=password -Djava.library.path=/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64 -Djava.endorsed.dirs=/usr/lib/bigtop-tomcat/endorsed -classpath /usr/lib/bigtop-tomcat/bin/bootstrap.jar -Dcatalina.base=/usr/hdp/current/oozie-server/oozie-server -Dcatalina.home=/usr/lib/bigtop-tomcat -Djava.io.tmpdir=/var/tmp/oozie org.apache.catalina.startup.Bootstrap start", "pid": 1555, "hadoop": true, "user": "oozie"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Xmx1195m -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/usr/hdp/2.6.0.3-8/hive/bin/../conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive/lib/hive-service-1.2.1000.2.6.0.3-8.jar org.apache.hive.service.server.HiveServer2 --hiveconf hive.aux.jars.path=file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar -hiveconf hive.metastore.uris= -hiveconf hive.log.file=hiveserver2.log -hiveconf hive.log.dir=/var/log/hive", "pid": 1634, "hadoop": true, "user": "hive"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dwebhcat.log.dir=/var/log/webhcat/ -Dlog4j.configuration=file:///usr/hdp/2.6.0.3-8/hive-hcatalog/sbin/../etc/webhcat/webhcat-log4j.properties -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hcat -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hcat -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive-hcatalog/sbin/../share/webhcat/svr/lib/hive-webhcat-1.2.1000.2.6.0.3-8.jar org.apache.hive.hcatalog.templeton.Main", "pid": 2020, "hadoop": true, "user": "hcat"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=2.6.0.3-8 -cp /usr/hdp/current/spark2-historyserver/conf/:/usr/hdp/current/spark2-historyserver/jars/*:/usr/hdp/current/hadoop-client/conf/ -Xmx1024m org.apache.spark.deploy.history.HistoryServer", "pid": 2192, "hadoop": true, "user": "spark"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx2g -cp 
/usr/hdp/current/livy2-server/jars/*:/usr/hdp/current/livy2-server/conf:/etc/spark2/conf:/etc/hadoop/conf: com.cloudera.livy.server.LivyServer", "pid": 2465, "hadoop": true, "user": "livy"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=2.6.0.3-8 -cp /usr/hdp/current/spark2-thriftserver/conf/:/usr/hdp/current/spark2-thriftserver/jars/*:/usr/hdp/current/hadoop-client/conf/ -Xmx1024m org.apache.spark.deploy.SparkSubmit --properties-file /usr/hdp/current/spark2-thriftserver/conf/spark-thrift-sparkconf.conf --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name Thrift JDBC/ODBC Server spark-internal", "pid": 2481, "hadoop": true, "user": "spark"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_historyserver -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dhdp.version= -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=mapred -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-mapreduce/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.root.logger=INFO,console -Dhadoop.id.str=mapred -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=mapred -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-mapreduce/mapred -Dhadoop.log.file=mapred-mapred-historyserver-sandbox.hortonworks.com.log -Dhadoop.root.logger=INFO,RFA -Dmapred.jobsummary.logger=INFO,JSA -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer", "pid": 3408, "hadoop": true, "user": "mapred"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=None -Dspark.executor.memory=512m -Dspark.executor.instances=2 -Dspark.yarn.queue=default -Dfile.encoding=UTF-8 -Xms1024m -Xmx1024m -XX:MaxPermSize=512m -Dlog4j.configuration=file:///usr/hdp/current/zeppelin-server/conf/log4j.properties -Dzeppelin.log.file=/var/log/zeppelin/zeppelin-zeppelin-sandbox.hortonworks.com.log -cp ::/usr/hdp/current/zeppelin-server/lib/interpreter/*:/usr/hdp/current/zeppelin-server/lib/*:/usr/hdp/current/zeppelin-server/*::/usr/hdp/current/zeppelin-server/conf org.apache.zeppelin.server.ZeppelinServer", "pid": 3468, "hadoop": false, "user": "zeppelin"}, {"command": "jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp 
/etc/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs -Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter", "pid": 3583, "hadoop": true, "user": "hdfs"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_resourcemanager -Xmx250m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY -Drm.audit.logger=INFO,RMAUDIT -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA 
-Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath /usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/rm-config/log4j.properties org.apache.hadoop.yarn.server.resourcemanager.ResourceManager", "pid": 6891, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_timelineserver -Xmx250m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Djava.util.logging.config.file=ats.logging.properties -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath 
/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/timelineserver-config/log4j.properties org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer", "pid": 7505, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_nodemanager -Xmx512m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -server -Dnm.audit.logger=INFO,NMAUDIT -Dnm.audit.logger=INFO,NMAUDIT -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath 
/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager", "pid": 7753, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.log.file=zookeeper-zookeeper-server-sandbox.hortonworks.com.log -Dzookeeper.root.logger=INFO,ROLLINGFILE -cp /usr/hdp/current/zookeeper-server/bin/../build/classes:/usr/hdp/current/zookeeper-server/bin/../build/lib/*.jar:/usr/hdp/current/zookeeper-server/bin/../lib/xercesMinimal-1.9.6.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-provider-api-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-shared4-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-shared-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-lightweight-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-file-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-api-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-utils-3.0.8.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-interpolation-1.11.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-container-default-1.0-alpha-9-stable-1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/netty-3.7.0.Final.jar:/usr/hdp/current/zookeeper-server/bin/../lib/nekohtml-1.9.6.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-settings-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-repository-metadata-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-project-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-profile-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-plugin-registry-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-model-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-error-diagnostics-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-artifact-manager-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-artifact-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-ant-tasks-2.1.3.jar:/usr/hdp/current/zookeeper-server/bin/../lib/log4j-1.2.16.jar:/usr/hdp/current/zookeeper-server/bin/../lib/jsoup-1.7.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/jline-0.9.94.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-logging-1.1.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-io-2.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-codec-1.6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/classworlds-1.1-alp
ha-2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/backport-util-concurrent-3.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/ant-launcher-1.8.0.jar:/usr/hdp/current/zookeeper-server/bin/../lib/ant-1.8.0.jar:/usr/hdp/current/zookeeper-server/bin/../zookeeper-3.4.6.2.6.0.3-8.jar:/usr/hdp/current/zookeeper-server/bin/../src/java/lib/*.jar:/usr/hdp/current/zookeeper-server/conf::/usr/share/zookeeper/*:/usr/share/zookeeper/* -Xmx1024m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/hdp/current/zookeeper-server/conf/zoo.cfg", "pid": 16042, "hadoop": true, "user": "zookeeper"}], "liveServices": [{"status": "Unhealthy", "name": "ntpd or chronyd", "desc": "ntpd is stopped\\n"}]}, "reverseLookup": true, "alternatives": [{"name": "hue-conf", "target": "/etc/hue/conf.empty"}], "umask": "18", "firewallName": "iptables", "stackFoldersAndFiles": [{"type": "directory", "name": "/etc/hadoop"}, {"type": "directory", "name": "/etc/hbase"}, {"type": "directory", "name": "/etc/hive"}, {"type": "directory", "name": "/etc/oozie"}, {"type": "directory", "name": "/etc/sqoop"}, {"type": "directory", "name": "/etc/hue"}, {"type": "directory", "name": "/etc/zookeeper"}, {"type": "directory", "name": "/etc/flume"}, {"type": "directory", "name": "/etc/storm"}, {"type": "directory", "name": "/etc/hive-hcatalog"}, {"type": "directory", "name": "/etc/tez"}, {"type": "directory", "name": "/etc/falcon"}, {"type": "directory", "name": "/etc/knox"}, {"type": "directory", "name": "/etc/hive-webhcat"}, {"type": "directory", "name": "/etc/kafka"}, {"type": "directory", "name": "/etc/slider"}, {"type": "directory", "name": "/etc/storm-slider-client"}, {"type": "directory", "name": "/etc/spark"}, {"type": "directory", "name": "/etc/pig"}, {"type": "directory", "name": "/etc/phoenix"}, {"type": "directory", "name": "/etc/ranger"}, {"type": "directory", "name": "/etc/ambari-metrics-collector"}, {"type": "directory", "name": "/etc/ambari-metrics-monitor"}, {"type": "directory", "name": "/etc/atlas"}, {"type": "directory", "name": "/etc/zeppelin"}, {"type": "directory", "name": "/var/run/hadoop"}, {"type": "directory", "name": "/var/run/hbase"}, {"type": "directory", "name": "/var/run/hive"}, {"type": "directory", "name": "/var/run/oozie"}, {"type": "directory", "name": "/var/run/sqoop"}, {"type": "directory", "name": "/var/run/zookeeper"}, {"type": "directory", "name": "/var/run/flume"}, {"type": "directory", "name": "/var/run/storm"}, {"type": "directory", "name": "/var/run/hive-hcatalog"}, {"type": "directory", "name": "/var/run/falcon"}, {"type": "directory", "name": "/var/run/webhcat"}, {"type": "directory", "name": "/var/run/hadoop-yarn"}, {"type": "directory", "name": "/var/run/hadoop-mapreduce"}, {"type": "directory", "name": "/var/run/knox"}, {"type": "directory", "name": "/var/run/kafka"}, {"type": "directory", "name": "/var/run/spark"}, {"type": "directory", "name": "/var/run/ranger"}, {"type": "directory", "name": "/var/run/ambari-metrics-collector"}, {"type": "directory", "name": "/var/run/ambari-metrics-monitor"}, {"type": "directory", "name": "/var/run/atlas"}, {"type": "directory", "name": "/var/run/zeppelin"}, {"type": "directory", "name": "/var/log/hadoop"}, {"type": "directory", "name": "/var/log/hbase"}, {"type": "directory", "name": "/var/log/hive"}, {"type": "directory", "name": "/var/log/oozie"}, {"type": "directory", "name": "/var/log/sqoop"}, {"type": "directory", "name": "/var/log/hue"}, {"type": "directory", "name": 
"/var/log/zookeeper"}, {"type": "directory", "name": "/var/log/flume"}, {"type": "directory", "name": "/var/log/storm"}, {"type": "directory", "name": "/var/log/hive-hcatalog"}, {"type": "directory", "name": "/var/log/falcon"}, {"type": "directory", "name": "/var/log/webhcat"}, {"type": "directory", "name": "/var/log/hadoop-yarn"}, {"type": "directory", "name": "/var/log/hadoop-mapreduce"}, {"type": "directory", "name": "/var/log/knox"}, {"type": "directory", "name": "/var/log/kafka"}, {"type": "directory", "name": "/var/log/spark"}, {"type": "directory", "name": "/var/log/ranger"}, {"type": "directory", "name": "/var/log/ambari-metrics-collector"}, {"type": "directory", "name": "/var/log/ambari-metrics-monitor"}, {"type": "directory", "name": "/var/log/atlas"}, {"type": "directory", "name": "/var/log/zeppelin"}, {"type": "directory", "name": "/usr/lib/flume"}, {"type": "directory", "name": "/usr/lib/storm"}, {"type": "directory", "name": "/usr/lib/ambari-metrics-collector"}, {"type": "directory", "name": "/var/lib/hive"}, {"type": "directory", "name": "/var/lib/oozie"}, {"type": "directory", "name": "/var/lib/hue"}, {"type": "directory", "name": "/var/lib/flume"}, {"type": "directory", "name": "/var/lib/hadoop-hdfs"}, {"type": "directory", "name": "/var/lib/hadoop-yarn"}, {"type": "directory", "name": "/var/lib/hadoop-mapreduce"}, {"type": "directory", "name": "/var/lib/knox"}, {"type": "directory", "name": "/var/lib/slider"}, {"type": "directory", "name": "/var/lib/spark"}, {"type": "directory", "name": "/var/lib/ranger"}, {"type": "directory", "name": "/var/lib/ambari-metrics-collector"}, {"type": "directory", "name": "/var/lib/zeppelin"}, {"type": "directory", "name": "/var/tmp/oozie"}, {"type": "directory", "name": "/var/tmp/sqoop"}, {"type": "directory", "name": "/tmp/hive"}, {"type": "directory", "name": "/tmp/ambari-qa"}, {"type": "directory", "name": "/tmp/hadoop-hdfs"}, {"type": "directory", "name": "/tmp/spark"}, {"type": "directory", "name": "/tmp/ranger"}, {"type": "directory", "name": "/hadoop/oozie"}, {"type": "directory", "name": "/hadoop/zookeeper"}, {"type": "directory", "name": "/hadoop/hdfs"}, {"type": "directory", "name": "/hadoop/storm"}, {"type": "directory", "name": "/hadoop/falcon"}, {"type": "directory", "name": "/hadoop/yarn"}, {"type": "directory", "name": "/kafka-logs"}], "existingUsers": [{"status": "Available", "name": "hive", "homeDir": "/home/hive"}, {"status": "Available", "name": "storm", "homeDir": "/home/storm"}, {"status": "Available", "name": "zookeeper", "homeDir": "/home/zookeeper"}, {"status": "Available", "name": "oozie", "homeDir": "/home/oozie"}, {"status": "Available", "name": "atlas", "homeDir": "/home/atlas"}, {"status": "Available", "name": "ams", "homeDir": "/home/ams"}, {"status": "Available", "name": "falcon", "homeDir": "/home/falcon"}, {"status": "Available", "name": "ranger", "homeDir": "/home/ranger"}, {"status": "Available", "name": "tez", "homeDir": "/home/tez"}, {"status": "Available", "name": "zeppelin", "homeDir": "/home/zeppelin"}, {"status": "Available", "name": "spark", "homeDir": "/home/spark"}, {"status": "Available", "name": "ambari-qa", "homeDir": "/home/ambari-qa"}, {"status": "Available", "name": "flume", "homeDir": "/home/flume"}, {"status": "Available", "name": "kafka", "homeDir": "/home/kafka"}, {"status": "Available", "name": "hdfs", "homeDir": "/home/hdfs"}, {"status": "Available", "name": "sqoop", "homeDir": "/home/sqoop"}, {"status": "Available", "name": "yarn", "homeDir": "/home/yarn"}, {"status": "Available", 
"name": "mapred", "homeDir": "/home/mapred"}, {"status": "Available", "name": "hbase", "homeDir": "/home/hbase"}, {"status": "Available", "name": "knox", "homeDir": "/home/knox"}, {"status": "Available", "name": "hcat", "homeDir": "/home/hcat"}, {"status": "Available", "name": "hue", "homeDir": "/usr/lib/hue"}, {"status": "Available", "name": "kms", "homeDir": "/var/lib/ranger/kms"}], "firewallRunning": false}, "timestamp": 1540783042921, "hostname": "sandbox.hortonworks.com", "responseId": -1, "publicHostname": "sandbox.hortonworks.com"}') INFO 2018-10-29 03:19:07,296 Controller.py:196 - Registration Successful (response id = 0) INFO 2018-10-29 03:19:07,296 ClusterConfiguration.py:119 - Updating cached configurations for cluster Sandbox INFO 2018-10-29 03:19:07,366 RecoveryManager.py:577 - RecoverConfig = {'components': 'FLUME_HANDLER,HIVE_METASTORE,HIVE_SERVER,HIVE_CLIENT,WEBHCAT_SERVER,HISTORYSERVER,MAPREDUCE2_CLIENT,OOZIE_SERVER,OOZIE_CLIENT,PIG,RANGER_ADMIN,RANGER_TAGSYNC,RANGER_USERSYNC,SLIDER,LIVY2_SERVER,SPARK2_CLIENT,SPARK2_THRIFTSERVER,SPARK2_JOBHISTORYSERVER,SQOOP,TEZ_CLIENT,NODEMANAGER,YARN_CLIENT,APP_TIMELINE_SERVER,RESOURCEMANAGER,ZEPPELIN_MASTER', 'maxCount': '6', 'maxLifetimeCount': '1024', 'recoveryTimestamp': 1540783047631, 'retryGap': '5', 'type': 'AUTO_START', 'windowInMinutes': '60'} INFO 2018-10-29 03:19:07,366 RecoveryManager.py:677 - ==> Auto recovery is enabled with maximum 6 in 60 minutes with gap of 5 minutes between and lifetime max being 1024. Enabled components - FLUME_HANDLER, HIVE_METASTORE, HIVE_SERVER, HIVE_CLIENT, WEBHCAT_SERVER, HISTORYSERVER, MAPREDUCE2_CLIENT, OOZIE_SERVER, OOZIE_CLIENT, PIG, RANGER_ADMIN, RANGER_TAGSYNC, RANGER_USERSYNC, SLIDER, LIVY2_SERVER, SPARK2_CLIENT, SPARK2_THRIFTSERVER, SPARK2_JOBHISTORYSERVER, SQOOP, TEZ_CLIENT, NODEMANAGER, YARN_CLIENT, APP_TIMELINE_SERVER, RESOURCEMANAGER, ZEPPELIN_MASTER INFO 2018-10-29 03:19:07,367 AmbariConfig.py:316 - Updating config property (agent.check.remote.mounts) with value (false) INFO 2018-10-29 03:19:07,367 AmbariConfig.py:316 - Updating config property (agent.auto.cache.update) with value (true) INFO 2018-10-29 03:19:07,367 AmbariConfig.py:316 - Updating config property (agent.check.mounts.timeout) with value (0) INFO 2018-10-29 03:19:07,367 Controller.py:258 - Adding 54 status commands. Heartbeat id = 0 INFO 2018-10-29 03:19:07,369 security.py:141 - Encountered communication error. 
Details: BadStatusLine('',)
ERROR 2018-10-29 03:19:07,370 Controller.py:226 - Unable to connect to: https://sandbox.hortonworks.com:8441/agent/v1/register/sandbox.hortonworks.com
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 213, in registerWithServer
    self.addToStatusQueue(ret['statusCommands'])
  File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 260, in addToStatusQueue
    self.updateComponents(commands[0]['clusterName'])
  File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 558, in updateComponents
    response = self.sendRequest(self.componentsUrl + cluster_name, None)
  File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 545, in sendRequest
    raise IOError('Request to {0} failed due to {1}'.format(url, str(exception)))
IOError: Request to https://sandbox.hortonworks.com:8441/agent/v1/components/Sandbox failed due to Error occured during connecting to the server:
ERROR 2018-10-29 03:19:07,371 Controller.py:227 - Error:Request to https://sandbox.hortonworks.com:8441/agent/v1/components/Sandbox failed due to Error occured during connecting to the server:
WARNING 2018-10-29 03:19:07,371 Controller.py:228 - Sleeping for 0 seconds and then trying again
INFO 2018-10-29 03:19:07,374 Controller.py:512 - Registration response from sandbox.hortonworks.com was OK
INFO 2018-10-29 03:19:07,374 Controller.py:517 - Resetting ActionQueue...
INFO 2018-10-29 03:19:17,386 Controller.py:304 - Heartbeat (response id = 0) with server is running...
INFO 2018-10-29 03:19:17,386 Controller.py:311 - Building heartbeat message
INFO 2018-10-29 03:19:17,393 Heartbeat.py:90 - Adding host info/state to heartbeat message.
INFO 2018-10-29 03:19:18,061 Hardware.py:174 - Some mount points were ignored: /, /dev, /sys/fs/cgroup, /hadoop, /etc/resolv.conf, /etc/hostname, /etc/hosts, /dev/shm
INFO 2018-10-29 03:19:18,068 Controller.py:320 - Sending Heartbeat (id = 0)
INFO 2018-10-29 03:19:18,073 NetUtil.py:67 - Connecting to https://sandbox.hortonworks.com:8440/connection_info
WARNING 2018-10-29 03:19:18,074 NetUtil.py:98 - Failed to connect to https://sandbox.hortonworks.com:8440/connection_info due to [Errno 111] Connection refused
INFO 2018-10-29 03:19:18,075 security.py:93 - SSL Connect being called.. connecting to the server
ERROR 2018-10-29 03:19:18,076 Controller.py:456 - Connection to sandbox.hortonworks.com was lost (details=Request to https://sandbox.hortonworks.com:8441/agent/v1/heartbeat/sandbox.hortonworks.com failed due to [Errno 111] Connection refused)
INFO 2018-10-29 03:19:30,086 Controller.py:471 - Waiting 9.9 for next heartbeat
INFO 2018-10-29 03:19:39,988 Controller.py:478 - Wait for next heartbeat over
INFO 2018-10-29 03:19:39,994 NetUtil.py:67 - Connecting to https://sandbox.hortonworks.com:8440/connection_info
WARNING 2018-10-29 03:19:39,997 NetUtil.py:98 - Failed to connect to https://sandbox.hortonworks.com:8440/connection_info due to [Errno 111] Connection refused
INFO 2018-10-29 03:19:39,998 security.py:93 - SSL Connect being called..
connecting to the server
ERROR 2018-10-29 03:19:39,999 Controller.py:456 - Unable to reconnect to https://sandbox.hortonworks.com:8441/agent/v1/heartbeat/sandbox.hortonworks.com (attempts=1, details=Request to https://sandbox.hortonworks.com:8441/agent/v1/heartbeat/sandbox.hortonworks.com failed due to [Errno 111] Connection refused)
INFO 2018-10-29 03:20:12,925 NetUtil.py:67 - Connecting to https://sandbox.hortonworks.com:8440/connection_info
INFO 2018-10-29 03:20:13,528 security.py:93 - SSL Connect being called.. connecting to the server
INFO 2018-10-29 03:20:13,793 security.py:60 - SSL connection established. Two-way SSL authentication is turned off on the server.
INFO 2018-10-29 03:20:13,923 Controller.py:353 - RegistrationCommand received - repeat agent registration
INFO 2018-10-29 03:20:14,640 Controller.py:170 - Registering with sandbox.hortonworks.com (172.17.0.2) (agent='{"hardwareProfile": {"kernel": "Linux", "domain": "hortonworks.com", "physicalprocessorcount": 4, "kernelrelease": "4.11.0-1.el7.elrepo.x86_64", "uptime_days": "0", "memorytotal": 10419156, "swapfree": "4.87 GB", "memorysize": 10419156, "osfamily": "redhat", "swapsize": "4.88 GB", "processorcount": 4, "netmask": "255.255.0.0", "timezone": "UTC", "hardwareisa": "x86_64", "memoryfree": 1546824, "operatingsystem": "centos", "kernelmajversion": "4.11", "kernelversion": "4.11.0", "macaddress": "02:42:AC:11:00:02", "operatingsystemrelease": "6.9", "ipaddress": "172.17.0.2", "hostname": "sandbox", "uptime_hours": "14", "fqdn": "sandbox.hortonworks.com", "id": "maria_dev", "architecture": "x86_64", "selinux": false, "mounts": [{"available": "25746000", "used": "16667648", "percent": "40%", "device": "overlay", "mountpoint": "/", "type": "overlay", "size": "44707764"}, {"available": "25746000", "used": "16667648", "percent": "40%", "device": "/dev/sda3", "mountpoint": "/hadoop", "type": "ext4", "size": "44707764"}], "hardwaremodel": "x86_64", "uptime_seconds": "52785", "interfaces": "eth0,lo"}, "currentPingPort": 8670, "prefix": "/var/lib/ambari-agent/data", "agentVersion": "2.5.0.5", "agentEnv": {"transparentHugePage": "", "hostHealth": {"agentTimeStampAtReporting": 1540783214536, "activeJavaProcs": [{"command": "/usr/lib/jvm/java/bin/java -Dproc_datanode -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-datanode-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m
-Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode", "pid": 516, "hadoop": true, "user": "hdfs"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_namenode -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-namenode-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc 
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode", "pid": 519, "hadoop": true, "user": "hdfs"}, {"command": "java -Dproc_rangeradmin -XX:MaxPermSize=256m -Xmx1024m -Xms1024m -Duser.timezone=UTC -Dservername=rangeradmin -Dlogdir=/var/log/ranger/admin -Dcatalina.base=/usr/hdp/2.6.0.3-8/ranger-admin/ews -cp /usr/hdp/2.6.0.3-8/ranger-admin/ews/webapp/WEB-INF/classes/conf:/usr/hdp/2.6.0.3-8/ranger-admin/ews/lib/*:/usr/hdp/2.6.0.3-8/ranger-admin/ews/ranger_jaas/*:/usr/hdp/2.6.0.3-8/ranger-admin/ews/webapp/WEB-INF/classes/conf/ranger_jaas:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.85.x86_64/lib/*:/*: org.apache.ranger.server.tomcat.EmbeddedServer", "pid": 863, "hadoop": false, "user": "ranger"}, {"command": "java -Dproc_rangerusersync -Dlog4j.configuration=file:/etc/ranger/usersync/conf/log4j.properties -Dlogdir=/var/log/ranger/usersync -cp /usr/hdp/2.6.0.3-8/ranger-usersync/dist/*:/usr/hdp/2.6.0.3-8/ranger-usersync/lib/*:/usr/hdp/2.6.0.3-8/ranger-usersync/conf:/* org.apache.ranger.authentication.UnixAuthenticationService -enableUnixAuth", "pid": 984, "hadoop": false, "user": "ranger"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_portmap -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop--portmap-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.portmap.Portmap", "pid": 1185, "hadoop": true, "user": "root"}, {"command": "jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp 
/etc/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs -Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter", "pid": 1208, "hadoop": true, "user": "root"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Xmx1195m -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/usr/hdp/2.6.0.3-8/hive/bin/../conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive/lib/hive-service-1.2.1000.2.6.0.3-8.jar org.apache.hadoop.hive.metastore.HiveMetaStore -hiveconf hive.log.file=hivemetastore.log -hiveconf hive.log.dir=/var/log/hive", "pid": 1457, "hadoop": true, "user": "hive"}, {"command": "/usr/lib/jvm/java/bin/java -Djava.util.logging.config.file=/usr/hdp/current/oozie-server/oozie-server/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Dhdp.version=2.6.0.3-8 -Xmx1024m -Xmx1024m -Dderby.stream.error.file=/var/log/oozie/derby.log -Doozie.home.dir=/usr/hdp/2.6.0.3-8/oozie -Doozie.config.dir=/usr/hdp/current/oozie-server/conf -Doozie.log.dir=/var/log/oozie -Doozie.data.dir=/hadoop/oozie/data -Doozie.instance.id=sandbox.hortonworks.com -Doozie.config.file=oozie-site.xml 
-Doozie.log4j.file=oozie-log4j.properties -Doozie.log4j.reload=10 -Doozie.http.hostname=sandbox.hortonworks.com -Doozie.admin.port=11001 -Doozie.http.port=11000 -Doozie.https.port=11443 -Doozie.base.url=http://sandbox.hortonworks.com:11000/oozie -Doozie.https.keystore.file=/home/oozie/.keystore -Doozie.https.keystore.pass=password -Djava.library.path=/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64 -Djava.endorsed.dirs=/usr/lib/bigtop-tomcat/endorsed -classpath /usr/lib/bigtop-tomcat/bin/bootstrap.jar -Dcatalina.base=/usr/hdp/current/oozie-server/oozie-server -Dcatalina.home=/usr/lib/bigtop-tomcat -Djava.io.tmpdir=/var/tmp/oozie org.apache.catalina.startup.Bootstrap start", "pid": 1555, "hadoop": true, "user": "oozie"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Xmx1195m -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/usr/hdp/2.6.0.3-8/hive/bin/../conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive/lib/hive-service-1.2.1000.2.6.0.3-8.jar org.apache.hive.service.server.HiveServer2 --hiveconf hive.aux.jars.path=file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar -hiveconf hive.metastore.uris= -hiveconf hive.log.file=hiveserver2.log -hiveconf hive.log.dir=/var/log/hive", "pid": 1634, "hadoop": true, "user": "hive"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dwebhcat.log.dir=/var/log/webhcat/ -Dlog4j.configuration=file:///usr/hdp/2.6.0.3-8/hive-hcatalog/sbin/../etc/webhcat/webhcat-log4j.properties -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hcat -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hcat -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive-hcatalog/sbin/../share/webhcat/svr/lib/hive-webhcat-1.2.1000.2.6.0.3-8.jar org.apache.hive.hcatalog.templeton.Main", "pid": 2020, "hadoop": true, "user": "hcat"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=2.6.0.3-8 -cp /usr/hdp/current/spark2-historyserver/conf/:/usr/hdp/current/spark2-historyserver/jars/*:/usr/hdp/current/hadoop-client/conf/ -Xmx1024m org.apache.spark.deploy.history.HistoryServer", "pid": 2192, "hadoop": true, "user": "spark"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx2g -cp /usr/hdp/current/livy2-server/jars/*:/usr/hdp/current/livy2-server/conf:/etc/spark2/conf:/etc/hadoop/conf: com.cloudera.livy.server.LivyServer", "pid": 2465, "hadoop": true, "user": "livy"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=2.6.0.3-8 -cp /usr/hdp/current/spark2-thriftserver/conf/:/usr/hdp/current/spark2-thriftserver/jars/*:/usr/hdp/current/hadoop-client/conf/ -Xmx1024m org.apache.spark.deploy.SparkSubmit --properties-file 
/usr/hdp/current/spark2-thriftserver/conf/spark-thrift-sparkconf.conf --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name Thrift JDBC/ODBC Server spark-internal", "pid": 2481, "hadoop": true, "user": "spark"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_historyserver -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dhdp.version= -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=mapred -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-mapreduce/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.root.logger=INFO,console -Dhadoop.id.str=mapred -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=mapred -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-mapreduce/mapred -Dhadoop.log.file=mapred-mapred-historyserver-sandbox.hortonworks.com.log -Dhadoop.root.logger=INFO,RFA -Dmapred.jobsummary.logger=INFO,JSA -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer", "pid": 3408, "hadoop": true, "user": "mapred"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=None -Dspark.executor.memory=512m -Dspark.executor.instances=2 -Dspark.yarn.queue=default -Dfile.encoding=UTF-8 -Xms1024m -Xmx1024m -XX:MaxPermSize=512m -Dlog4j.configuration=file:///usr/hdp/current/zeppelin-server/conf/log4j.properties -Dzeppelin.log.file=/var/log/zeppelin/zeppelin-zeppelin-sandbox.hortonworks.com.log -cp ::/usr/hdp/current/zeppelin-server/lib/interpreter/*:/usr/hdp/current/zeppelin-server/lib/*:/usr/hdp/current/zeppelin-server/*::/usr/hdp/current/zeppelin-server/conf org.apache.zeppelin.server.ZeppelinServer", "pid": 3468, "hadoop": false, "user": "zeppelin"}, {"command": "jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp /etc/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true 
-Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs -Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter", "pid": 3583, "hadoop": true, "user": "hdfs"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_resourcemanager -Xmx250m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY -Drm.audit.logger=INFO,RMAUDIT -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath 
/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/rm-config/log4j.properties org.apache.hadoop.yarn.server.resourcemanager.ResourceManager", "pid": 6891, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_timelineserver -Xmx250m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Djava.util.logging.config.file=ats.logging.properties -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath 
/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/timelineserver-config/log4j.properties org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer", "pid": 7505, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_nodemanager -Xmx512m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -server -Dnm.audit.logger=INFO,NMAUDIT -Dnm.audit.logger=INFO,NMAUDIT -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath 
/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager", "pid": 7753, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.log.file=zookeeper-zookeeper-server-sandbox.hortonworks.com.log -Dzookeeper.root.logger=INFO,ROLLINGFILE -cp /usr/hdp/current/zookeeper-server/bin/../build/classes:/usr/hdp/current/zookeeper-server/bin/../build/lib/*.jar:/usr/hdp/current/zookeeper-server/bin/../lib/xercesMinimal-1.9.6.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-provider-api-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-shared4-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-shared-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-lightweight-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-file-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-api-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-utils-3.0.8.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-interpolation-1.11.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-container-default-1.0-alpha-9-stable-1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/netty-3.7.0.Final.jar:/usr/hdp/current/zookeeper-server/bin/../lib/nekohtml-1.9.6.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-settings-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-repository-metadata-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-project-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-profile-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-plugin-registry-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-model-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-error-diagnostics-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-artifact-manager-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-artifact-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-ant-tasks-2.1.3.jar:/usr/hdp/current/zookeeper-server/bin/../lib/log4j-1.2.16.jar:/usr/hdp/current/zookeeper-server/bin/../lib/jsoup-1.7.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/jline-0.9.94.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-logging-1.1.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-io-2.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-codec-1.6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/classworlds-1.1-alp
ha-2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/backport-util-concurrent-3.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/ant-launcher-1.8.0.jar:/usr/hdp/current/zookeeper-server/bin/../lib/ant-1.8.0.jar:/usr/hdp/current/zookeeper-server/bin/../zookeeper-3.4.6.2.6.0.3-8.jar:/usr/hdp/current/zookeeper-server/bin/../src/java/lib/*.jar:/usr/hdp/current/zookeeper-server/conf::/usr/share/zookeeper/*:/usr/share/zookeeper/* -Xmx1024m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/hdp/current/zookeeper-server/conf/zoo.cfg", "pid": 16042, "hadoop": true, "user": "zookeeper"}], "liveServices": [{"status": "Unhealthy", "name": "ntpd or chronyd", "desc": "ntpd is stopped\\n"}]}, "reverseLookup": true, "alternatives": [{"name": "hue-conf", "target": "/etc/hue/conf.empty"}], "umask": "18", "firewallName": "iptables", "stackFoldersAndFiles": [{"type": "directory", "name": "/etc/hadoop"}, {"type": "directory", "name": "/etc/hbase"}, {"type": "directory", "name": "/etc/hive"}, {"type": "directory", "name": "/etc/oozie"}, {"type": "directory", "name": "/etc/sqoop"}, {"type": "directory", "name": "/etc/hue"}, {"type": "directory", "name": "/etc/zookeeper"}, {"type": "directory", "name": "/etc/flume"}, {"type": "directory", "name": "/etc/storm"}, {"type": "directory", "name": "/etc/hive-hcatalog"}, {"type": "directory", "name": "/etc/tez"}, {"type": "directory", "name": "/etc/falcon"}, {"type": "directory", "name": "/etc/knox"}, {"type": "directory", "name": "/etc/hive-webhcat"}, {"type": "directory", "name": "/etc/kafka"}, {"type": "directory", "name": "/etc/slider"}, {"type": "directory", "name": "/etc/storm-slider-client"}, {"type": "directory", "name": "/etc/spark"}, {"type": "directory", "name": "/etc/pig"}, {"type": "directory", "name": "/etc/phoenix"}, {"type": "directory", "name": "/etc/ranger"}, {"type": "directory", "name": "/etc/ambari-metrics-collector"}, {"type": "directory", "name": "/etc/ambari-metrics-monitor"}, {"type": "directory", "name": "/etc/atlas"}, {"type": "directory", "name": "/etc/zeppelin"}, {"type": "directory", "name": "/var/run/hadoop"}, {"type": "directory", "name": "/var/run/hbase"}, {"type": "directory", "name": "/var/run/hive"}, {"type": "directory", "name": "/var/run/oozie"}, {"type": "directory", "name": "/var/run/sqoop"}, {"type": "directory", "name": "/var/run/zookeeper"}, {"type": "directory", "name": "/var/run/flume"}, {"type": "directory", "name": "/var/run/storm"}, {"type": "directory", "name": "/var/run/hive-hcatalog"}, {"type": "directory", "name": "/var/run/falcon"}, {"type": "directory", "name": "/var/run/webhcat"}, {"type": "directory", "name": "/var/run/hadoop-yarn"}, {"type": "directory", "name": "/var/run/hadoop-mapreduce"}, {"type": "directory", "name": "/var/run/knox"}, {"type": "directory", "name": "/var/run/kafka"}, {"type": "directory", "name": "/var/run/spark"}, {"type": "directory", "name": "/var/run/ranger"}, {"type": "directory", "name": "/var/run/ambari-metrics-collector"}, {"type": "directory", "name": "/var/run/ambari-metrics-monitor"}, {"type": "directory", "name": "/var/run/atlas"}, {"type": "directory", "name": "/var/run/zeppelin"}, {"type": "directory", "name": "/var/log/hadoop"}, {"type": "directory", "name": "/var/log/hbase"}, {"type": "directory", "name": "/var/log/hive"}, {"type": "directory", "name": "/var/log/oozie"}, {"type": "directory", "name": "/var/log/sqoop"}, {"type": "directory", "name": "/var/log/hue"}, {"type": "directory", "name": 
"/var/log/zookeeper"}, {"type": "directory", "name": "/var/log/flume"}, {"type": "directory", "name": "/var/log/storm"}, {"type": "directory", "name": "/var/log/hive-hcatalog"}, {"type": "directory", "name": "/var/log/falcon"}, {"type": "directory", "name": "/var/log/webhcat"}, {"type": "directory", "name": "/var/log/hadoop-yarn"}, {"type": "directory", "name": "/var/log/hadoop-mapreduce"}, {"type": "directory", "name": "/var/log/knox"}, {"type": "directory", "name": "/var/log/kafka"}, {"type": "directory", "name": "/var/log/spark"}, {"type": "directory", "name": "/var/log/ranger"}, {"type": "directory", "name": "/var/log/ambari-metrics-collector"}, {"type": "directory", "name": "/var/log/ambari-metrics-monitor"}, {"type": "directory", "name": "/var/log/atlas"}, {"type": "directory", "name": "/var/log/zeppelin"}, {"type": "directory", "name": "/usr/lib/flume"}, {"type": "directory", "name": "/usr/lib/storm"}, {"type": "directory", "name": "/usr/lib/ambari-metrics-collector"}, {"type": "directory", "name": "/var/lib/hive"}, {"type": "directory", "name": "/var/lib/oozie"}, {"type": "directory", "name": "/var/lib/hue"}, {"type": "directory", "name": "/var/lib/flume"}, {"type": "directory", "name": "/var/lib/hadoop-hdfs"}, {"type": "directory", "name": "/var/lib/hadoop-yarn"}, {"type": "directory", "name": "/var/lib/hadoop-mapreduce"}, {"type": "directory", "name": "/var/lib/knox"}, {"type": "directory", "name": "/var/lib/slider"}, {"type": "directory", "name": "/var/lib/spark"}, {"type": "directory", "name": "/var/lib/ranger"}, {"type": "directory", "name": "/var/lib/ambari-metrics-collector"}, {"type": "directory", "name": "/var/lib/zeppelin"}, {"type": "directory", "name": "/var/tmp/oozie"}, {"type": "directory", "name": "/var/tmp/sqoop"}, {"type": "directory", "name": "/tmp/hive"}, {"type": "directory", "name": "/tmp/ambari-qa"}, {"type": "directory", "name": "/tmp/hadoop-hdfs"}, {"type": "directory", "name": "/tmp/spark"}, {"type": "directory", "name": "/tmp/ranger"}, {"type": "directory", "name": "/hadoop/oozie"}, {"type": "directory", "name": "/hadoop/zookeeper"}, {"type": "directory", "name": "/hadoop/hdfs"}, {"type": "directory", "name": "/hadoop/storm"}, {"type": "directory", "name": "/hadoop/falcon"}, {"type": "directory", "name": "/hadoop/yarn"}, {"type": "directory", "name": "/kafka-logs"}], "existingUsers": [{"status": "Available", "name": "hive", "homeDir": "/home/hive"}, {"status": "Available", "name": "storm", "homeDir": "/home/storm"}, {"status": "Available", "name": "zookeeper", "homeDir": "/home/zookeeper"}, {"status": "Available", "name": "oozie", "homeDir": "/home/oozie"}, {"status": "Available", "name": "atlas", "homeDir": "/home/atlas"}, {"status": "Available", "name": "ams", "homeDir": "/home/ams"}, {"status": "Available", "name": "falcon", "homeDir": "/home/falcon"}, {"status": "Available", "name": "ranger", "homeDir": "/home/ranger"}, {"status": "Available", "name": "tez", "homeDir": "/home/tez"}, {"status": "Available", "name": "zeppelin", "homeDir": "/home/zeppelin"}, {"status": "Available", "name": "spark", "homeDir": "/home/spark"}, {"status": "Available", "name": "ambari-qa", "homeDir": "/home/ambari-qa"}, {"status": "Available", "name": "flume", "homeDir": "/home/flume"}, {"status": "Available", "name": "kafka", "homeDir": "/home/kafka"}, {"status": "Available", "name": "hdfs", "homeDir": "/home/hdfs"}, {"status": "Available", "name": "sqoop", "homeDir": "/home/sqoop"}, {"status": "Available", "name": "yarn", "homeDir": "/home/yarn"}, {"status": "Available", 
"name": "mapred", "homeDir": "/home/mapred"}, {"status": "Available", "name": "hbase", "homeDir": "/home/hbase"}, {"status": "Available", "name": "knox", "homeDir": "/home/knox"}, {"status": "Available", "name": "hcat", "homeDir": "/home/hcat"}, {"status": "Available", "name": "hue", "homeDir": "/usr/lib/hue"}, {"status": "Available", "name": "kms", "homeDir": "/var/lib/ranger/kms"}], "firewallRunning": false}, "timestamp": 1540783213934, "hostname": "sandbox.hortonworks.com", "responseId": -1, "publicHostname": "sandbox.hortonworks.com"}') INFO 2018-10-29 03:21:43,871 Controller.py:196 - Registration Successful (response id = 0) INFO 2018-10-29 03:21:43,871 ClusterConfiguration.py:119 - Updating cached configurations for cluster Sandbox INFO 2018-10-29 03:21:43,932 RecoveryManager.py:577 - RecoverConfig = {'components': 'FLUME_HANDLER,HIVE_METASTORE,HIVE_SERVER,HIVE_CLIENT,WEBHCAT_SERVER,HISTORYSERVER,MAPREDUCE2_CLIENT,OOZIE_SERVER,OOZIE_CLIENT,PIG,RANGER_ADMIN,RANGER_TAGSYNC,RANGER_USERSYNC,SLIDER,LIVY2_SERVER,SPARK2_CLIENT,SPARK2_THRIFTSERVER,SPARK2_JOBHISTORYSERVER,SQOOP,TEZ_CLIENT,NODEMANAGER,YARN_CLIENT,APP_TIMELINE_SERVER,RESOURCEMANAGER,ZEPPELIN_MASTER', 'maxCount': '6', 'maxLifetimeCount': '1024', 'recoveryTimestamp': 1540783218226, 'retryGap': '5', 'type': 'AUTO_START', 'windowInMinutes': '60'} INFO 2018-10-29 03:21:43,932 RecoveryManager.py:677 - ==> Auto recovery is enabled with maximum 6 in 60 minutes with gap of 5 minutes between and lifetime max being 1024. Enabled components - FLUME_HANDLER, HIVE_METASTORE, HIVE_SERVER, HIVE_CLIENT, WEBHCAT_SERVER, HISTORYSERVER, MAPREDUCE2_CLIENT, OOZIE_SERVER, OOZIE_CLIENT, PIG, RANGER_ADMIN, RANGER_TAGSYNC, RANGER_USERSYNC, SLIDER, LIVY2_SERVER, SPARK2_CLIENT, SPARK2_THRIFTSERVER, SPARK2_JOBHISTORYSERVER, SQOOP, TEZ_CLIENT, NODEMANAGER, YARN_CLIENT, APP_TIMELINE_SERVER, RESOURCEMANAGER, ZEPPELIN_MASTER INFO 2018-10-29 03:21:43,933 AmbariConfig.py:316 - Updating config property (agent.check.remote.mounts) with value (false) INFO 2018-10-29 03:21:43,933 AmbariConfig.py:316 - Updating config property (agent.auto.cache.update) with value (true) INFO 2018-10-29 03:21:43,933 AmbariConfig.py:316 - Updating config property (agent.check.mounts.timeout) with value (0) INFO 2018-10-29 03:21:43,934 Controller.py:258 - Adding 54 status commands. Heartbeat id = 0 INFO 2018-10-29 03:21:43,963 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component INFRA_SOLR of service AMBARI_INFRA of cluster Sandbox to the queue. INFO 2018-10-29 03:21:44,876 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component INFRA_SOLR_CLIENT of service AMBARI_INFRA of cluster Sandbox to the queue. INFO 2018-10-29 03:21:46,069 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component METRICS_MONITOR of service AMBARI_METRICS of cluster Sandbox to the queue. INFO 2018-10-29 03:21:48,878 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component METRICS_GRAFANA of service AMBARI_METRICS of cluster Sandbox to the queue. INFO 2018-10-29 03:21:49,760 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component ATLAS_CLIENT of service ATLAS of cluster Sandbox to the queue. INFO 2018-10-29 03:21:50,647 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component ATLAS_SERVER of service ATLAS of cluster Sandbox to the queue. INFO 2018-10-29 03:21:53,337 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component FALCON_SERVER of service FALCON of cluster Sandbox to the queue. 
INFO 2018-10-29 03:21:57,469 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component HBASE_MASTER of service HBASE of cluster Sandbox to the queue. INFO 2018-10-29 03:21:58,772 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component HBASE_REGIONSERVER of service HBASE of cluster Sandbox to the queue. INFO 2018-10-29 03:21:59,655 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component NFS_GATEWAY of service HDFS of cluster Sandbox to the queue. INFO 2018-10-29 03:22:05,878 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component HIVE_METASTORE of service HIVE of cluster Sandbox to the queue. INFO 2018-10-29 03:22:08,544 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component HIVE_CLIENT of service HIVE of cluster Sandbox to the queue. INFO 2018-10-29 03:22:09,903 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component WEBHCAT_SERVER of service HIVE of cluster Sandbox to the queue. INFO 2018-10-29 03:22:15,577 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component MAPREDUCE2_CLIENT of service MAPREDUCE2 of cluster Sandbox to the queue. INFO 2018-10-29 03:22:18,282 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component OOZIE_CLIENT of service OOZIE of cluster Sandbox to the queue. INFO 2018-10-29 03:22:22,287 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component RANGER_TAGSYNC of service RANGER of cluster Sandbox to the queue. INFO 2018-10-29 03:22:23,939 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component RANGER_USERSYNC of service RANGER of cluster Sandbox to the queue. INFO 2018-10-29 03:22:24,913 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SLIDER of service SLIDER of cluster Sandbox to the queue. INFO 2018-10-29 03:22:28,601 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component LIVY_SERVER of service SPARK of cluster Sandbox to the queue. INFO 2018-10-29 03:22:30,057 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SPARK_THRIFTSERVER of service SPARK of cluster Sandbox to the queue. INFO 2018-10-29 03:22:31,489 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component LIVY2_SERVER of service SPARK2 of cluster Sandbox to the queue. INFO 2018-10-29 03:22:32,374 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SPARK2_CLIENT of service SPARK2 of cluster Sandbox to the queue. INFO 2018-10-29 03:22:33,259 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SPARK2_THRIFTSERVER of service SPARK2 of cluster Sandbox to the queue. INFO 2018-10-29 03:22:35,883 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SQOOP of service SQOOP of cluster Sandbox to the queue. INFO 2018-10-29 03:22:37,200 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component SUPERVISOR of service STORM of cluster Sandbox to the queue. INFO 2018-10-29 03:22:38,612 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component NIMBUS of service STORM of cluster Sandbox to the queue. INFO 2018-10-29 03:22:43,795 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component NODEMANAGER of service YARN of cluster Sandbox to the queue. INFO 2018-10-29 03:22:49,015 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component ZEPPELIN_MASTER of service ZEPPELIN of cluster Sandbox to the queue. INFO 2018-10-29 03:22:50,352 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component ZOOKEEPER_SERVER of service ZOOKEEPER of cluster Sandbox to the queue. 
INFO 2018-10-29 03:22:51,243 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component ZOOKEEPER_CLIENT of service ZOOKEEPER of cluster Sandbox to the queue. INFO 2018-10-29 03:23:02,495 Controller.py:304 - Heartbeat (response id = 0) with server is running... INFO 2018-10-29 03:23:02,495 Controller.py:311 - Building heartbeat message INFO 2018-10-29 03:23:13,537 Heartbeat.py:90 - Adding host info/state to heartbeat message. INFO 2018-10-29 03:23:14,137 Hardware.py:174 - Some mount points were ignored: /, /dev, /sys/fs/cgroup, /hadoop, /etc/resolv.conf, /etc/hostname, /etc/hosts, /dev/shm INFO 2018-10-29 03:23:14,197 Controller.py:320 - Sending Heartbeat (id = 0) INFO 2018-10-29 03:23:14,417 Controller.py:332 - Heartbeat response received (id = 0) INFO 2018-10-29 03:23:14,418 Controller.py:341 - Heartbeat interval is 10 seconds INFO 2018-10-29 03:23:14,418 Controller.py:353 - RegistrationCommand received - repeat agent registration INFO 2018-10-29 03:23:15,100 Controller.py:170 - Registering with sandbox.hortonworks.com (172.17.0.2) (agent='{"hardwareProfile": {"kernel": "Linux", "domain": "hortonworks.com", "physicalprocessorcount": 4, "kernelrelease": "4.11.0-1.el7.elrepo.x86_64", "uptime_days": "0", "memorytotal": 10419156, "swapfree": "4.87 GB", "memorysize": 10419156, "osfamily": "redhat", "swapsize": "4.88 GB", "processorcount": 4, "netmask": "255.255.0.0", "timezone": "UTC", "hardwareisa": "x86_64", "memoryfree": 1546824, "operatingsystem": "centos", "kernelmajversion": "4.11", "kernelversion": "4.11.0", "macaddress": "02:42:AC:11:00:02", "operatingsystemrelease": "6.9", "ipaddress": "172.17.0.2", "hostname": "sandbox", "uptime_hours": "14", "fqdn": "sandbox.hortonworks.com", "id": "maria_dev", "architecture": "x86_64", "selinux": false, "mounts": [{"available": "25746000", "used": "16667648", "percent": "40%", "device": "overlay", "mountpoint": "/", "type": "overlay", "size": "44707764"}, {"available": "25746000", "used": "16667648", "percent": "40%", "device": "/dev/sda3", "mountpoint": "/hadoop", "type": "ext4", "size": "44707764"}], "hardwaremodel": "x86_64", "uptime_seconds": "52785", "interfaces": "eth0,lo"}, "currentPingPort": 8670, "prefix": "/var/lib/ambari-agent/data", "agentVersion": "2.5.0.5", "agentEnv": {"transparentHugePage": "", "hostHealth": {"agentTimeStampAtReporting": 1540783394993, "activeJavaProcs": [{"command": "/usr/lib/jvm/java/bin/java -Dproc_datanode -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-datanode-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log 
-XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode", "pid": 516, "hadoop": true, "user": "hdfs"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_namenode -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-namenode-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -server -XX:ParallelGCThreads=8 
-XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -Xloggc:/var/log/hadoop/hdfs/gc.log-201810281232 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError=\\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode", "pid": 519, "hadoop": true, "user": "hdfs"}, {"command": "java -Dproc_rangeradmin -XX:MaxPermSize=256m -Xmx1024m -Xms1024m -Duser.timezone=UTC -Dservername=rangeradmin -Dlogdir=/var/log/ranger/admin -Dcatalina.base=/usr/hdp/2.6.0.3-8/ranger-admin/ews -cp /usr/hdp/2.6.0.3-8/ranger-admin/ews/webapp/WEB-INF/classes/conf:/usr/hdp/2.6.0.3-8/ranger-admin/ews/lib/*:/usr/hdp/2.6.0.3-8/ranger-admin/ews/ranger_jaas/*:/usr/hdp/2.6.0.3-8/ranger-admin/ews/webapp/WEB-INF/classes/conf/ranger_jaas:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.85.x86_64/lib/*:/*: org.apache.ranger.server.tomcat.EmbeddedServer", "pid": 863, "hadoop": false, "user": "ranger"}, {"command": "java -Dproc_rangerusersync -Dlog4j.configuration=file:/etc/ranger/usersync/conf/log4j.properties -Dlogdir=/var/log/ranger/usersync -cp /usr/hdp/2.6.0.3-8/ranger-usersync/dist/*:/usr/hdp/2.6.0.3-8/ranger-usersync/lib/*:/usr/hdp/2.6.0.3-8/ranger-usersync/conf:/* org.apache.ranger.authentication.UnixAuthenticationService -enableUnixAuth", "pid": 984, "hadoop": false, "user": "ranger"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_portmap -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop--portmap-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.portmap.Portmap", "pid": 1185, "hadoop": true, "user": "root"}, {"command": "jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp 
/etc/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs -Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter", "pid": 1208, "hadoop": true, "user": "root"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Xmx1195m -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/usr/hdp/2.6.0.3-8/hive/bin/../conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive/lib/hive-service-1.2.1000.2.6.0.3-8.jar org.apache.hadoop.hive.metastore.HiveMetaStore -hiveconf hive.log.file=hivemetastore.log -hiveconf hive.log.dir=/var/log/hive", "pid": 1457, "hadoop": true, "user": "hive"}, {"command": "/usr/lib/jvm/java/bin/java -Djava.util.logging.config.file=/usr/hdp/current/oozie-server/oozie-server/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Dhdp.version=2.6.0.3-8 -Xmx1024m -Xmx1024m -Dderby.stream.error.file=/var/log/oozie/derby.log -Doozie.home.dir=/usr/hdp/2.6.0.3-8/oozie -Doozie.config.dir=/usr/hdp/current/oozie-server/conf -Doozie.log.dir=/var/log/oozie -Doozie.data.dir=/hadoop/oozie/data -Doozie.instance.id=sandbox.hortonworks.com -Doozie.config.file=oozie-site.xml 
-Doozie.log4j.file=oozie-log4j.properties -Doozie.log4j.reload=10 -Doozie.http.hostname=sandbox.hortonworks.com -Doozie.admin.port=11001 -Doozie.http.port=11000 -Doozie.https.port=11443 -Doozie.base.url=http://sandbox.hortonworks.com:11000/oozie -Doozie.https.keystore.file=/home/oozie/.keystore -Doozie.https.keystore.pass=password -Djava.library.path=/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64 -Djava.endorsed.dirs=/usr/lib/bigtop-tomcat/endorsed -classpath /usr/lib/bigtop-tomcat/bin/bootstrap.jar -Dcatalina.base=/usr/hdp/current/oozie-server/oozie-server -Dcatalina.home=/usr/lib/bigtop-tomcat -Djava.io.tmpdir=/var/tmp/oozie org.apache.catalina.startup.Bootstrap start", "pid": 1555, "hadoop": true, "user": "oozie"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Xmx1195m -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/usr/hdp/2.6.0.3-8/hive/bin/../conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive/lib/hive-service-1.2.1000.2.6.0.3-8.jar org.apache.hive.service.server.HiveServer2 --hiveconf hive.aux.jars.path=file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar -hiveconf hive.metastore.uris= -hiveconf hive.log.file=hiveserver2.log -hiveconf hive.log.dir=/var/log/hive", "pid": 1634, "hadoop": true, "user": "hive"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Dwebhcat.log.dir=/var/log/webhcat/ -Dlog4j.configuration=file:///usr/hdp/2.6.0.3-8/hive-hcatalog/sbin/../etc/webhcat/webhcat-log4j.properties -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/hcat -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=hcat -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.6.0.3-8/hive-hcatalog/sbin/../share/webhcat/svr/lib/hive-webhcat-1.2.1000.2.6.0.3-8.jar org.apache.hive.hcatalog.templeton.Main", "pid": 2020, "hadoop": true, "user": "hcat"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=2.6.0.3-8 -cp /usr/hdp/current/spark2-historyserver/conf/:/usr/hdp/current/spark2-historyserver/jars/*:/usr/hdp/current/hadoop-client/conf/ -Xmx1024m org.apache.spark.deploy.history.HistoryServer", "pid": 2192, "hadoop": true, "user": "spark"}, {"command": "/usr/lib/jvm/java/bin/java -Xmx2g -cp /usr/hdp/current/livy2-server/jars/*:/usr/hdp/current/livy2-server/conf:/etc/spark2/conf:/etc/hadoop/conf: com.cloudera.livy.server.LivyServer", "pid": 2465, "hadoop": true, "user": "livy"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=2.6.0.3-8 -cp /usr/hdp/current/spark2-thriftserver/conf/:/usr/hdp/current/spark2-thriftserver/jars/*:/usr/hdp/current/hadoop-client/conf/ -Xmx1024m org.apache.spark.deploy.SparkSubmit --properties-file 
/usr/hdp/current/spark2-thriftserver/conf/spark-thrift-sparkconf.conf --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name Thrift JDBC/ODBC Server spark-internal", "pid": 2481, "hadoop": true, "user": "spark"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_historyserver -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dhdp.version= -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=mapred -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-mapreduce/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.root.logger=INFO,console -Dhadoop.id.str=mapred -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str=mapred -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-mapreduce/mapred -Dhadoop.log.file=mapred-mapred-historyserver-sandbox.hortonworks.com.log -Dhadoop.root.logger=INFO,RFA -Dmapred.jobsummary.logger=INFO,JSA -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer", "pid": 3408, "hadoop": true, "user": "mapred"}, {"command": "/usr/lib/jvm/java/bin/java -Dhdp.version=None -Dspark.executor.memory=512m -Dspark.executor.instances=2 -Dspark.yarn.queue=default -Dfile.encoding=UTF-8 -Xms1024m -Xmx1024m -XX:MaxPermSize=512m -Dlog4j.configuration=file:///usr/hdp/current/zeppelin-server/conf/log4j.properties -Dzeppelin.log.file=/var/log/zeppelin/zeppelin-zeppelin-sandbox.hortonworks.com.log -cp ::/usr/hdp/current/zeppelin-server/lib/interpreter/*:/usr/hdp/current/zeppelin-server/lib/*:/usr/hdp/current/zeppelin-server/*::/usr/hdp/current/zeppelin-server/conf org.apache.zeppelin.server.ZeppelinServer", "pid": 3468, "hadoop": false, "user": "zeppelin"}, {"command": "jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp /etc/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf -Xmx250m -Dhdp.version=2.6.0.3-8 -Djava.net.preferIPv4Stack=true 
-Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs -Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter", "pid": 3583, "hadoop": true, "user": "hdfs"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_resourcemanager -Xmx250m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY -Drm.audit.logger=INFO,RMAUDIT -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath 
/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/rm-config/log4j.properties org.apache.hadoop.yarn.server.resourcemanager.ResourceManager", "pid": 6891, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_timelineserver -Xmx250m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Djava.util.logging.config.file=ats.logging.properties -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-timelineserver-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath 
/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/timelineserver-config/log4j.properties org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer", "pid": 7505, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dproc_nodemanager -Xmx512m -Dhdp.version=2.6.0.3-8 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -server -Dnm.audit.logger=INFO,NMAUDIT -Dnm.audit.logger=INFO,NMAUDIT -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.6.0.3-8/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -classpath 
/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/conf:/usr/hdp/2.6.0.3-8/hadoop/lib/*:/usr/hdp/2.6.0.3-8/hadoop/.//*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/./:/usr/hdp/2.6.0.3-8/hadoop-hdfs/lib/*:/usr/hdp/2.6.0.3-8/hadoop-hdfs/.//*:/usr/hdp/2.6.0.3-8/hadoop-yarn/lib/*:/usr/hdp/2.6.0.3-8/hadoop-yarn/.//*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/lib/*:/usr/hdp/2.6.0.3-8/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.6.0.3-8/tez/*:/usr/hdp/2.6.0.3-8/tez/lib/*:/usr/hdp/2.6.0.3-8/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/usr/hdp/2.6.0.3-8/hadoop/conf/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager", "pid": 7753, "hadoop": true, "user": "yarn"}, {"command": "/usr/lib/jvm/java/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.log.file=zookeeper-zookeeper-server-sandbox.hortonworks.com.log -Dzookeeper.root.logger=INFO,ROLLINGFILE -cp /usr/hdp/current/zookeeper-server/bin/../build/classes:/usr/hdp/current/zookeeper-server/bin/../build/lib/*.jar:/usr/hdp/current/zookeeper-server/bin/../lib/xercesMinimal-1.9.6.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-provider-api-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-shared4-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-shared-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-lightweight-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-file-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-api-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-utils-3.0.8.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-interpolation-1.11.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-container-default-1.0-alpha-9-stable-1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/netty-3.7.0.Final.jar:/usr/hdp/current/zookeeper-server/bin/../lib/nekohtml-1.9.6.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-settings-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-repository-metadata-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-project-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-profile-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-plugin-registry-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-model-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-error-diagnostics-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-artifact-manager-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-artifact-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-ant-tasks-2.1.3.jar:/usr/hdp/current/zookeeper-server/bin/../lib/log4j-1.2.16.jar:/usr/hdp/current/zookeeper-server/bin/../lib/jsoup-1.7.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/jline-0.9.94.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-logging-1.1.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-io-2.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-codec-1.6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/classworlds-1.1-alp
ha-2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/backport-util-concurrent-3.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/ant-launcher-1.8.0.jar:/usr/hdp/current/zookeeper-server/bin/../lib/ant-1.8.0.jar:/usr/hdp/current/zookeeper-server/bin/../zookeeper-3.4.6.2.6.0.3-8.jar:/usr/hdp/current/zookeeper-server/bin/../src/java/lib/*.jar:/usr/hdp/current/zookeeper-server/conf::/usr/share/zookeeper/*:/usr/share/zookeeper/* -Xmx1024m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/hdp/current/zookeeper-server/conf/zoo.cfg", "pid": 16042, "hadoop": true, "user": "zookeeper"}], "liveServices": [{"status": "Unhealthy", "name": "ntpd or chronyd", "desc": "ntpd is stopped\\n"}]}, "reverseLookup": true, "alternatives": [{"name": "hue-conf", "target": "/etc/hue/conf.empty"}], "umask": "18", "firewallName": "iptables", "stackFoldersAndFiles": [{"type": "directory", "name": "/etc/hadoop"}, {"type": "directory", "name": "/etc/hbase"}, {"type": "directory", "name": "/etc/hive"}, {"type": "directory", "name": "/etc/oozie"}, {"type": "directory", "name": "/etc/sqoop"}, {"type": "directory", "name": "/etc/hue"}, {"type": "directory", "name": "/etc/zookeeper"}, {"type": "directory", "name": "/etc/flume"}, {"type": "directory", "name": "/etc/storm"}, {"type": "directory", "name": "/etc/hive-hcatalog"}, {"type": "directory", "name": "/etc/tez"}, {"type": "directory", "name": "/etc/falcon"}, {"type": "directory", "name": "/etc/knox"}, {"type": "directory", "name": "/etc/hive-webhcat"}, {"type": "directory", "name": "/etc/kafka"}, {"type": "directory", "name": "/etc/slider"}, {"type": "directory", "name": "/etc/storm-slider-client"}, {"type": "directory", "name": "/etc/spark"}, {"type": "directory", "name": "/etc/pig"}, {"type": "directory", "name": "/etc/phoenix"}, {"type": "directory", "name": "/etc/ranger"}, {"type": "directory", "name": "/etc/ambari-metrics-collector"}, {"type": "directory", "name": "/etc/ambari-metrics-monitor"}, {"type": "directory", "name": "/etc/atlas"}, {"type": "directory", "name": "/etc/zeppelin"}, {"type": "directory", "name": "/var/run/hadoop"}, {"type": "directory", "name": "/var/run/hbase"}, {"type": "directory", "name": "/var/run/hive"}, {"type": "directory", "name": "/var/run/oozie"}, {"type": "directory", "name": "/var/run/sqoop"}, {"type": "directory", "name": "/var/run/zookeeper"}, {"type": "directory", "name": "/var/run/flume"}, {"type": "directory", "name": "/var/run/storm"}, {"type": "directory", "name": "/var/run/hive-hcatalog"}, {"type": "directory", "name": "/var/run/falcon"}, {"type": "directory", "name": "/var/run/webhcat"}, {"type": "directory", "name": "/var/run/hadoop-yarn"}, {"type": "directory", "name": "/var/run/hadoop-mapreduce"}, {"type": "directory", "name": "/var/run/knox"}, {"type": "directory", "name": "/var/run/kafka"}, {"type": "directory", "name": "/var/run/spark"}, {"type": "directory", "name": "/var/run/ranger"}, {"type": "directory", "name": "/var/run/ambari-metrics-collector"}, {"type": "directory", "name": "/var/run/ambari-metrics-monitor"}, {"type": "directory", "name": "/var/run/atlas"}, {"type": "directory", "name": "/var/run/zeppelin"}, {"type": "directory", "name": "/var/log/hadoop"}, {"type": "directory", "name": "/var/log/hbase"}, {"type": "directory", "name": "/var/log/hive"}, {"type": "directory", "name": "/var/log/oozie"}, {"type": "directory", "name": "/var/log/sqoop"}, {"type": "directory", "name": "/var/log/hue"}, {"type": "directory", "name": 
"/var/log/zookeeper"}, {"type": "directory", "name": "/var/log/flume"}, {"type": "directory", "name": "/var/log/storm"}, {"type": "directory", "name": "/var/log/hive-hcatalog"}, {"type": "directory", "name": "/var/log/falcon"}, {"type": "directory", "name": "/var/log/webhcat"}, {"type": "directory", "name": "/var/log/hadoop-yarn"}, {"type": "directory", "name": "/var/log/hadoop-mapreduce"}, {"type": "directory", "name": "/var/log/knox"}, {"type": "directory", "name": "/var/log/kafka"}, {"type": "directory", "name": "/var/log/spark"}, {"type": "directory", "name": "/var/log/ranger"}, {"type": "directory", "name": "/var/log/ambari-metrics-collector"}, {"type": "directory", "name": "/var/log/ambari-metrics-monitor"}, {"type": "directory", "name": "/var/log/atlas"}, {"type": "directory", "name": "/var/log/zeppelin"}, {"type": "directory", "name": "/usr/lib/flume"}, {"type": "directory", "name": "/usr/lib/storm"}, {"type": "directory", "name": "/usr/lib/ambari-metrics-collector"}, {"type": "directory", "name": "/var/lib/hive"}, {"type": "directory", "name": "/var/lib/oozie"}, {"type": "directory", "name": "/var/lib/hue"}, {"type": "directory", "name": "/var/lib/flume"}, {"type": "directory", "name": "/var/lib/hadoop-hdfs"}, {"type": "directory", "name": "/var/lib/hadoop-yarn"}, {"type": "directory", "name": "/var/lib/hadoop-mapreduce"}, {"type": "directory", "name": "/var/lib/knox"}, {"type": "directory", "name": "/var/lib/slider"}, {"type": "directory", "name": "/var/lib/spark"}, {"type": "directory", "name": "/var/lib/ranger"}, {"type": "directory", "name": "/var/lib/ambari-metrics-collector"}, {"type": "directory", "name": "/var/lib/zeppelin"}, {"type": "directory", "name": "/var/tmp/oozie"}, {"type": "directory", "name": "/var/tmp/sqoop"}, {"type": "directory", "name": "/tmp/hive"}, {"type": "directory", "name": "/tmp/ambari-qa"}, {"type": "directory", "name": "/tmp/hadoop-hdfs"}, {"type": "directory", "name": "/tmp/spark"}, {"type": "directory", "name": "/tmp/ranger"}, {"type": "directory", "name": "/hadoop/oozie"}, {"type": "directory", "name": "/hadoop/zookeeper"}, {"type": "directory", "name": "/hadoop/hdfs"}, {"type": "directory", "name": "/hadoop/storm"}, {"type": "directory", "name": "/hadoop/falcon"}, {"type": "directory", "name": "/hadoop/yarn"}, {"type": "directory", "name": "/kafka-logs"}], "existingUsers": [{"status": "Available", "name": "hive", "homeDir": "/home/hive"}, {"status": "Available", "name": "storm", "homeDir": "/home/storm"}, {"status": "Available", "name": "zookeeper", "homeDir": "/home/zookeeper"}, {"status": "Available", "name": "oozie", "homeDir": "/home/oozie"}, {"status": "Available", "name": "atlas", "homeDir": "/home/atlas"}, {"status": "Available", "name": "ams", "homeDir": "/home/ams"}, {"status": "Available", "name": "falcon", "homeDir": "/home/falcon"}, {"status": "Available", "name": "ranger", "homeDir": "/home/ranger"}, {"status": "Available", "name": "tez", "homeDir": "/home/tez"}, {"status": "Available", "name": "zeppelin", "homeDir": "/home/zeppelin"}, {"status": "Available", "name": "spark", "homeDir": "/home/spark"}, {"status": "Available", "name": "ambari-qa", "homeDir": "/home/ambari-qa"}, {"status": "Available", "name": "flume", "homeDir": "/home/flume"}, {"status": "Available", "name": "kafka", "homeDir": "/home/kafka"}, {"status": "Available", "name": "hdfs", "homeDir": "/home/hdfs"}, {"status": "Available", "name": "sqoop", "homeDir": "/home/sqoop"}, {"status": "Available", "name": "yarn", "homeDir": "/home/yarn"}, {"status": "Available", 
"name": "mapred", "homeDir": "/home/mapred"}, {"status": "Available", "name": "hbase", "homeDir": "/home/hbase"}, {"status": "Available", "name": "knox", "homeDir": "/home/knox"}, {"status": "Available", "name": "hcat", "homeDir": "/home/hcat"}, {"status": "Available", "name": "hue", "homeDir": "/usr/lib/hue"}, {"status": "Available", "name": "kms", "homeDir": "/var/lib/ranger/kms"}], "firewallRunning": false}, "timestamp": 1540783394432, "hostname": "sandbox.hortonworks.com", "responseId": -1, "publicHostname": "sandbox.hortonworks.com"}') INFO 2018-10-29 03:24:38,478 Controller.py:196 - Registration Successful (response id = 0) INFO 2018-10-29 03:24:38,479 ClusterConfiguration.py:119 - Updating cached configurations for cluster Sandbox INFO 2018-10-29 03:24:38,546 RecoveryManager.py:577 - RecoverConfig = {'components': 'FLUME_HANDLER,HIVE_METASTORE,HIVE_SERVER,HIVE_CLIENT,WEBHCAT_SERVER,HISTORYSERVER,MAPREDUCE2_CLIENT,OOZIE_SERVER,OOZIE_CLIENT,PIG,RANGER_ADMIN,RANGER_TAGSYNC,RANGER_USERSYNC,SLIDER,LIVY2_SERVER,SPARK2_CLIENT,SPARK2_THRIFTSERVER,SPARK2_JOBHISTORYSERVER,SQOOP,TEZ_CLIENT,NODEMANAGER,YARN_CLIENT,APP_TIMELINE_SERVER,RESOURCEMANAGER,ZEPPELIN_MASTER', 'maxCount': '6', 'maxLifetimeCount': '1024', 'recoveryTimestamp': 1540783395598, 'retryGap': '5', 'type': 'AUTO_START', 'windowInMinutes': '60'} INFO 2018-10-29 03:24:38,546 RecoveryManager.py:677 - ==> Auto recovery is enabled with maximum 6 in 60 minutes with gap of 5 minutes between and lifetime max being 1024. Enabled components - FLUME_HANDLER, HIVE_METASTORE, HIVE_SERVER, HIVE_CLIENT, WEBHCAT_SERVER, HISTORYSERVER, MAPREDUCE2_CLIENT, OOZIE_SERVER, OOZIE_CLIENT, PIG, RANGER_ADMIN, RANGER_TAGSYNC, RANGER_USERSYNC, SLIDER, LIVY2_SERVER, SPARK2_CLIENT, SPARK2_THRIFTSERVER, SPARK2_JOBHISTORYSERVER, SQOOP, TEZ_CLIENT, NODEMANAGER, YARN_CLIENT, APP_TIMELINE_SERVER, RESOURCEMANAGER, ZEPPELIN_MASTER INFO 2018-10-29 03:24:38,546 AmbariConfig.py:316 - Updating config property (agent.check.remote.mounts) with value (false) INFO 2018-10-29 03:24:38,548 AmbariConfig.py:316 - Updating config property (agent.auto.cache.update) with value (true) INFO 2018-10-29 03:24:38,548 AmbariConfig.py:316 - Updating config property (agent.check.mounts.timeout) with value (0) INFO 2018-10-29 03:24:38,548 Controller.py:258 - Adding 54 status commands. Heartbeat id = 0 INFO 2018-10-29 03:24:38,564 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component INFRA_SOLR of service AMBARI_INFRA of cluster Sandbox to the queue. INFO 2018-10-29 03:24:39,452 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component INFRA_SOLR_CLIENT of service AMBARI_INFRA of cluster Sandbox to the queue. INFO 2018-10-29 03:24:42,004 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component METRICS_COLLECTOR of service AMBARI_METRICS of cluster Sandbox to the queue. INFO 2018-10-29 03:24:43,381 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component METRICS_GRAFANA of service AMBARI_METRICS of cluster Sandbox to the queue. INFO 2018-10-29 03:24:44,252 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component ATLAS_CLIENT of service ATLAS of cluster Sandbox to the queue. INFO 2018-10-29 03:24:45,160 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component ATLAS_SERVER of service ATLAS of cluster Sandbox to the queue. INFO 2018-10-29 03:24:47,748 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component FALCON_SERVER of service FALCON of cluster Sandbox to the queue. 
INFO 2018-10-29 03:24:49,072 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component FLUME_HANDLER of service FLUME of cluster Sandbox to the queue.
INFO 2018-10-29 03:24:50,316 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component HBASE_CLIENT of service HBASE of cluster Sandbox to the queue.
INFO 2018-10-29 03:24:53,095 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component HBASE_REGIONSERVER of service HBASE of cluster Sandbox to the queue.
INFO 2018-10-29 03:24:55,856 StatusCommandsExecutor.py:65 - Adding STATUS_COMMAND for component DATANODE of service HDFS of cluster Sandbox to the queue.
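Note on the RecoverConfig entry above: it lists the limits the agent applies before auto-restarting a component (maxCount=6 per windowInMinutes=60, retryGap=5 minutes, maxLifetimeCount=1024, type AUTO_START). The Python below is a minimal sketch of how such window/gap/lifetime limits could be evaluated; it is an illustration only, and the RecoveryWindow class and its methods are hypothetical names, not Ambari's actual RecoveryManager API.

import time

class RecoveryWindow(object):
    """Illustrative model of the auto-recovery limits printed in the log above."""

    def __init__(self, max_count=6, window_minutes=60,
                 retry_gap_minutes=5, max_lifetime_count=1024):
        self.max_count = max_count                       # 'maxCount'
        self.window_seconds = window_minutes * 60        # 'windowInMinutes'
        self.retry_gap_seconds = retry_gap_minutes * 60  # 'retryGap'
        self.max_lifetime_count = max_lifetime_count     # 'maxLifetimeCount'
        self.attempts = []           # timestamps of attempts inside the sliding window
        self.lifetime_attempts = 0   # attempts since the agent started

    def may_recover(self, now=None):
        # Return True if another auto-recovery attempt is currently allowed.
        now = time.time() if now is None else now
        # Keep only attempts that still fall inside the trailing window.
        self.attempts = [t for t in self.attempts if now - t < self.window_seconds]
        if self.lifetime_attempts >= self.max_lifetime_count:
            return False
        if len(self.attempts) >= self.max_count:
            return False
        if self.attempts and now - self.attempts[-1] < self.retry_gap_seconds:
            return False
        return True

    def record_attempt(self, now=None):
        now = time.time() if now is None else now
        self.attempts.append(now)
        self.lifetime_attempts += 1

# Example: apply the limits from the log to one component before restarting it.
window = RecoveryWindow(max_count=6, window_minutes=60,
                        retry_gap_minutes=5, max_lifetime_count=1024)
if window.may_recover():
    window.record_attempt()
    # ... issue the restart for the component here (hypothetical step) ...

Under these limits a restart is skipped once six attempts have been made within the trailing hour, when less than five minutes have passed since the previous attempt, or after 1024 attempts over the agent's lifetime, which matches the "maximum 6 in 60 minutes with gap of 5 minutes between and lifetime max being 1024" message logged by RecoveryManager.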