
Oozie conflicts with existing Tomcat installation

Expert Contributor

I have lost track of the number of unsuccessful attempts at this deployment through the console. All prerequisites are in place: SELinux disabled, THP disabled, NTP in sync, passwordless SSH from the master node (Ambari server) to the data node, and so on.
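
For reference, a quick way to re-verify those prerequisites from the Ambari server; the hostname below is a placeholder and the THP path varies by distro:

    # Confirm SELinux is disabled (expect "Disabled" or "Permissive")
    getenforce
    # Confirm THP is off (expect "[never]"; on RHEL 6 the path is
    # /sys/kernel/mm/redhat_transparent_hugepage/enabled)
    cat /sys/kernel/mm/transparent_hugepage/enabled
    # Confirm NTP is in sync
    ntpq -p
    # Confirm passwordless SSH to the data node (replace datanode.example)
    ssh -o BatchMode=yes datanode.example 'echo ok'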

  1. I once again got a Java Process warning on the master node, after the hosts were successfully registered.

    Process Issues (1) The following process should not be running /usr/lib/jvm/jre/bin/java -classpath

  2. Warnings on the namenode, end to end; no specific warning messages were shown.
  3. Failures on the datanode, right from the DataNode install. Help please!

Thanks

    resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh /usr/bin/hdp-select set all `ambari-python-wrap /usr/bin/hdp-select versions | grep ^2.4 | tail -1`' returned 1. Traceback (most recent call last):
      File "/usr/bin/hdp-select", line 375, in <module>
        setPackages(pkgs, args[2], options.rpm_mode)
      File "/usr/bin/hdp-select", line 268, in setPackages
        os.symlink(target + "/" + leaves[pkg], linkname)
    OSError: [Errno 17] File exists
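
Errno 17 means hdp-select tripped over a symlink that already exists under /usr/hdp/current, typically left behind by an earlier failed run. A read-only way to inspect what it is colliding with (removing or repairing links is left to hdp-select itself):

    # List the per-component symlinks that hdp-select manages
    ls -l /usr/hdp/current/
    # Show the installed HDP versions as hdp-select sees them
    /usr/bin/hdp-select versions
    # Show the current link target for every component
    /usr/bin/hdp-select status
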
1 ACCEPTED SOLUTION

Expert Contributor

This is resolved.

Possible Cause

The main problem was Oozie not finding "/etc/tomcat/conf/ssl/server.xml". The Oozie server ships with its own app server (an embedded Tomcat); it should therefore not refer to, or conflict with, the standalone Tomcat app server we had deployed for our own purposes.

setting CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-server/oozie-server}
setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat

It did, however, refer to /etc/tomcat. We had configuration settings for CATALINA_BASE and CATALINA_HOME in .bashrc, /etc/profile and /etc/init.d/tomcat.

oozie-setup.sh references CATALINA_BASE in many places; this is likely why it resolved the wrong path.
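
A quick way to confirm where the conflicting values come from (paths taken from the output above; adjust to your layout):

    # Show how oozie-setup.sh consumes CATALINA_BASE
    grep -n 'CATALINA_BASE' /usr/hdp/current/oozie-server/bin/oozie-setup.sh
    # Find system-wide Tomcat settings that leak into Oozie's environment
    grep -n 'CATALINA' /etc/profile /etc/init.d/tomcat ~/.bashrc 2>/dev/null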

Solution:

Walked through the shell scripts of Oozie and the other services that did not start.

Commented out the references to CATALINA_HOME and CATALINA_BASE in /etc/profile and /etc/init.d/tomcat.
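
As a sketch, assuming those files set the variables as plain "export CATALINA_...=" lines, the following comments them out in place (sed keeps a .bak copy of each file; assignments without "export" need the pattern adjusted):

    # Comment out any "export CATALINA_*" line in both files, keeping backups
    sudo sed -i.bak 's/^\([[:space:]]*export CATALINA_\)/#\1/' /etc/profile /etc/init.d/tomcat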

Impact:

All Hadoop services have started.

Caution

Users who want to run a Tomcat app server on the same host as Hadoop can create this conflict if the Tomcat configuration is set in /etc/profile and /etc/init.d/tomcat.

The app server may either need to run on a separate host from Oozie, or its Tomcat environment should be enabled only for a specific user through that user's .bashrc.
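
A minimal sketch of the user-scoped alternative, assuming a dedicated tomcat account and an install under /opt/tomcat (both placeholders):

    # In the tomcat user's ~/.bashrc only, so Oozie's login environment is untouched
    export CATALINA_HOME=/opt/tomcat        # placeholder install path
    export CATALINA_BASE=/opt/tomcat
    export CATALINA_PID=$CATALINA_BASE/tomcat.pid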


13 REPLIES

Expert Contributor

Yet another failure, this time at the App Timeline Server install. I cannot understand why this deployment has kept failing over the last 3+ weeks. I would very much appreciate guidance and recommendations, please.

Name node

2016-03-15 17:12:20,084 - Could not determine HDP version for component hadoop-yarn-timelineserver by calling '/usr/bin/hdp-select status hadoop-yarn-timelineserver > /tmp/tmpeEX1z1'. Return Code: 127, Output: .

resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh /usr/bin/hdp-select set all `ambari-python-wrap /usr/bin/hdp-select versions | grep ^2.4 | tail -1`' returned 127. ambari-python-wrap: can't open file '/usr/bin/hdp-select': [Errno 2] No such file or directory
/var/lib/ambari-agent/ambari-sudo.sh: line 50: /usr/bin/hdp-select: No such file or directory
                      stdout:    /var/lib/ambari-agent/data/output-37.txt 
     

Data node

2016-03-15 17:12:28,257 - Could not determine HDP version for component hadoop-hdfs-datanode by calling '/usr/bin/hdp-select status hadoop-hdfs-datanode > /tmp/tmp0Iaoqk'. Return Code: 127, Output: .

resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh /usr/bin/hdp-select set all `ambari-python-wrap /usr/bin/hdp-select versions | grep ^2.4 | tail -1`' returned 127. ambari-python-wrap: can't open file '/usr/bin/hdp-select': [Errno 2] No such file or directory
/var/lib/ambari-agent/ambari-sudo.sh: line 50: /usr/bin/hdp-select: No such file or directory
                      stdout:    /var/lib/ambari-agent/data/output-9.txt 
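
Return code 127 means /usr/bin/hdp-select itself is missing on those hosts. A hedged way to check and restore it, assuming the hdp-select package from the HDP repo is the right source for your setup:

    # Check whether the script and its package are present
    ls -l /usr/bin/hdp-select
    rpm -q hdp-select
    # If missing, (re)install it from the HDP repo
    sudo yum reinstall -y hdp-select || sudo yum install -y hdp-select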


Expert Contributor

I deleted both instances in EC2, set up new ones, and configured everything from scratch. It did make a difference: HDP 2.4 installed and started the services, though with some warnings, which I set out below. I would appreciate it if someone could guide me from here, please.

Many thanks for the help.

[screenshot attached: 2897-capture.png]

Namenode: Oozie server start, and the warnings that followed through to the end

Oozie server start

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_server.py", line 195, in <module>
    OozieServer().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_server.py", line 73, in start
    self.configure(env)
  File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_server.py", line 67, in configure
    oozie(is_server=True)
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py", line 156, in oozie
    oozie_server_specific()
  File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie.py", line 248, in oozie_server_specific
    not_if  = format("{no_op_test} || {skip_recreate_sharelib} && {skip_prepare_war_cmd}")
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'cd /var/tmp/oozie && /usr/hdp/current/oozie-server/bin/oozie-setup.sh prepare-war ' returned 255.   setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-server/conf}
  setting CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-server/oozie-server}
  setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
  setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
  setting JAVA_HOME=/usr/jdk64/jdk1.8.0_60
  setting JRE_HOME=${JAVA_HOME}
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
  setting OOZIE_LOG=/var/log/oozie
  setting CATALINA_PID=/var/run/oozie/oozie.pid
  setting OOZIE_DATA=/hadoop/oozie/data
  setting OOZIE_HTTP_PORT=11000
  setting OOZIE_ADMIN_PORT=11001
  setting JAVA_LIBRARY_PATH=/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
  setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} -Doozie.connection.retry.count=5 "
  setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-server/conf}
  setting CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-server/oozie-server}
  setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
  setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
  setting JAVA_HOME=/usr/jdk64/jdk1.8.0_60
  setting JRE_HOME=${JAVA_HOME}
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
  setting OOZIE_LOG=/var/log/oozie
  setting CATALINA_PID=/var/run/oozie/oozie.pid
  setting OOZIE_DATA=/hadoop/oozie/data
  setting OOZIE_HTTP_PORT=11000
  setting OOZIE_ADMIN_PORT=11001
  setting JAVA_LIBRARY_PATH=/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
  setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} -Doozie.connection.retry.count=5 "

INFO: Adding extension: /usr/hdp/current/oozie-server/libext/falcon-oozie-el-extension-0.6.1.2.4.0.0-169.jar
INFO: Adding extension: /usr/hdp/current/oozie-server/libext/mysql-connector-java.jar

File/Dir does no exist: /etc/tomcat/conf/ssl/server.xml
                      stdout:    /var/lib/ambari-agent/data/output-164.txt 

                      2016-03-18 19:21:03,672 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-03-18 19:21:03,672 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-03-18 19:21:03,673 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-03-18 19:21:03,705 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-03-18 19:21:03,705 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-03-18 19:21:03,736 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-03-18 19:21:03,736 - Ensuring that hadoop has the correct symlink structure
2016-03-18 19:21:03,736 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-03-18 19:21:03,842 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-03-18 19:21:03,842 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-03-18 19:21:03,842 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-03-18 19:21:03,874 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-03-18 19:21:03,874 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-03-18 19:21:03,904 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-03-18 19:21:03,904 - Ensuring that hadoop has the correct symlink structure
2016-03-18 19:21:03,905 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-03-18 19:21:03,906 - Group['spark'] {}
2016-03-18 19:21:03,907 - Group['hadoop'] {}
2016-03-18 19:21:03,907 - Group['users'] {}
2016-03-18 19:21:03,907 - Group['knox'] {}
2016-03-18 19:21:03,907 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-18 19:21:03,908 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-18 19:21:03,909 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-03-18 19:21:03,909 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-18 19:21:03,910 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-18 19:21:03,910 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-03-18 19:21:03,911 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-03-18 19:21:03,911 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-18 19:21:03,912 - User['mahout'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-18 19:21:03,912 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-18 19:21:03,913 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-03-18 19:21:03,913 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-18 19:21:03,914 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-18 19:21:03,914 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-18 19:21:03,915 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-18 19:21:03,915 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-18 19:21:03,916 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-18 19:21:03,917 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-18 19:21:03,917 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-18 19:21:03,918 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-03-18 19:21:03,918 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-03-18 19:21:03,919 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-03-18 19:21:03,933 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-03-18 19:21:03,934 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-03-18 19:21:03,934 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-03-18 19:21:03,935 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-03-18 19:21:03,949 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-03-18 19:21:03,950 - Group['hdfs'] {}
2016-03-18 19:21:03,950 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2016-03-18 19:21:03,951 - Directory['/etc/hadoop'] {'mode': 0755}
2016-03-18 19:21:03,962 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-03-18 19:21:03,963 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-03-18 19:21:03,972 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-03-18 19:21:03,988 - Skipping Execute[('setenforce', '0')] due to not_if
2016-03-18 19:21:03,988 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-03-18 19:21:03,990 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-03-18 19:21:03,991 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-03-18 19:21:03,995 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-03-18 19:21:03,997 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-03-18 19:21:03,998 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-03-18 19:21:04,005 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2016-03-18 19:21:04,006 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-03-18 19:21:04,006 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-03-18 19:21:04,010 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-03-18 19:21:04,023 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-03-18 19:21:04,181 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-03-18 19:21:04,182 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-03-18 19:21:04,182 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-03-18 19:21:04,213 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-03-18 19:21:04,214 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-03-18 19:21:04,245 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-03-18 19:21:04,245 - Ensuring that hadoop has the correct symlink structure
2016-03-18 19:21:04,245 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-03-18 19:21:04,251 - HdfsResource['/user/oozie'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://namenode.teg:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'oozie', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'mode': 0775}
2016-03-18 19:21:04,252 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://namenode.teg:50070/webhdfs/v1/user/oozie?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpYgSYWa 2>/tmp/tmp8TNTaj''] {'logoutput': None, 'quiet': False}
2016-03-18 19:21:04,320 - call returned (0, '')
2016-03-18 19:21:04,321 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://namenode.teg:50070/webhdfs/v1/user/oozie?op=MKDIRS&user.name=hdfs'"'"' 1>/tmp/tmpANxf4U 2>/tmp/tmpWZJxIy''] {'logoutput': None, 'quiet': False}
2016-03-18 19:21:04,375 - call returned (0, '')
2016-03-18 19:21:04,377 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://namenode.teg:50070/webhdfs/v1/user/oozie?op=SETPERMISSION&user.name=hdfs&permission=775'"'"' 1>/tmp/tmp0z6s37 2>/tmp/tmpfJnqSM''] {'logoutput': None, 'quiet': False}
2016-03-18 19:21:04,430 - call returned (0, '')
2016-03-18 19:21:04,431 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://namenode.teg:50070/webhdfs/v1/user/oozie?op=SETOWNER&user.name=hdfs&owner=oozie&group='"'"' 1>/tmp/tmpfcDteX 2>/tmp/tmp3N9BSW''] {'logoutput': None, 'quiet': False}
2016-03-18 19:21:04,485 - call returned (0, '')
2016-03-18 19:21:04,485 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://namenode.teg:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf'}
2016-03-18 19:21:04,486 - Directory['/usr/hdp/current/oozie-server/conf'] {'owner': 'oozie', 'group': 'hadoop', 'recursive': True}
2016-03-18 19:21:04,487 - Changing owner for /usr/hdp/current/oozie-server/conf from 0 to oozie
2016-03-18 19:21:04,487 - Changing group for /usr/hdp/current/oozie-server/conf from 0 to hadoop
2016-03-18 19:21:04,488 - XmlConfig['oozie-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/oozie-server/conf', 'mode': 0664, 'configuration_attributes': {}, 'owner': 'oozie', 'configurations': ...}
2016-03-18 19:21:04,497 - Generating config: /usr/hdp/current/oozie-server/conf/oozie-site.xml
2016-03-18 19:21:04,497 - File['/usr/hdp/current/oozie-server/conf/oozie-site.xml'] {'owner': 'oozie', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0664, 'encoding': 'UTF-8'}
2016-03-18 19:21:04,520 - File['/usr/hdp/current/oozie-server/conf/oozie-env.sh'] {'content': InlineTemplate(...), 'owner': 'oozie', 'group': 'hadoop'}
2016-03-18 19:21:04,520 - Writing File['/usr/hdp/current/oozie-server/conf/oozie-env.sh'] because contents don't match
2016-03-18 19:21:04,521 - File['/usr/hdp/current/oozie-server/conf/oozie-log4j.properties'] {'content': ..., 'owner': 'oozie', 'group': 'hadoop', 'mode': 0644}
2016-03-18 19:21:04,525 - File['/usr/hdp/current/oozie-server/conf/adminusers.txt'] {'content': Template('adminusers.txt.j2'), 'owner': 'oozie', 'group': 'hadoop', 'mode': 0644}
2016-03-18 19:21:04,525 - File['/usr/lib/ambari-agent/DBConnectionVerification.jar'] {'content': DownloadSource('http://namenode.teg:8081/resources/DBConnectionVerification.jar')}
2016-03-18 19:21:04,525 - Not downloading the file from http://namenode.teg:8081/resources/DBConnectionVerification.jar, because /var/lib/ambari-agent/tmp/DBConnectionVerification.jar already exists
2016-03-18 19:21:04,526 - File['/usr/hdp/current/oozie-server/conf/hadoop-config.xml'] {'owner': 'oozie', 'group': 'hadoop'}
2016-03-18 19:21:04,526 - File['/usr/hdp/current/oozie-server/conf/oozie-default.xml'] {'owner': 'oozie', 'group': 'hadoop'}
2016-03-18 19:21:04,526 - Directory['/usr/hdp/current/oozie-server/conf/action-conf'] {'owner': 'oozie', 'group': 'hadoop'}
2016-03-18 19:21:04,527 - File['/usr/hdp/current/oozie-server/conf/action-conf/hive.xml'] {'owner': 'oozie', 'group': 'hadoop'}
2016-03-18 19:21:04,527 - File['/var/run/oozie/oozie.pid'] {'action': ['delete'], 'not_if': "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'"}
2016-03-18 19:21:04,573 - Directory['/usr/hdp/current/oozie-server//var/tmp/oozie'] {'owner': 'oozie', 'cd_access': 'a', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
2016-03-18 19:21:04,574 - Creating directory Directory['/usr/hdp/current/oozie-server//var/tmp/oozie'] since it doesn't exist.
2016-03-18 19:21:04,574 - Changing owner for /usr/hdp/current/oozie-server//var/tmp/oozie from 0 to oozie
2016-03-18 19:21:04,574 - Changing group for /usr/hdp/current/oozie-server//var/tmp/oozie from 0 to hadoop
2016-03-18 19:21:04,575 - Directory['/var/run/oozie'] {'owner': 'oozie', 'cd_access': 'a', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
2016-03-18 19:21:04,575 - Changing group for /var/run/oozie from 984 to hadoop
2016-03-18 19:21:04,575 - Directory['/var/log/oozie'] {'owner': 'oozie', 'cd_access': 'a', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
2016-03-18 19:21:04,575 - Changing group for /var/log/oozie from 984 to hadoop
2016-03-18 19:21:04,576 - Directory['/var/tmp/oozie'] {'owner': 'oozie', 'cd_access': 'a', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
2016-03-18 19:21:04,576 - Changing group for /var/tmp/oozie from 984 to hadoop
2016-03-18 19:21:04,576 - Directory['/hadoop/oozie/data'] {'owner': 'oozie', 'cd_access': 'a', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
2016-03-18 19:21:04,576 - Creating directory Directory['/hadoop/oozie/data'] since it doesn't exist.
2016-03-18 19:21:04,577 - Changing owner for /hadoop/oozie/data from 0 to oozie
2016-03-18 19:21:04,577 - Changing group for /hadoop/oozie/data from 0 to hadoop
2016-03-18 19:21:04,577 - Directory['/usr/hdp/current/oozie-server'] {'owner': 'oozie', 'cd_access': 'a', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
2016-03-18 19:21:04,577 - Changing owner for /usr/hdp/current/oozie-server from 0 to oozie
2016-03-18 19:21:04,577 - Changing group for /usr/hdp/current/oozie-server from 0 to hadoop
2016-03-18 19:21:04,578 - Directory['/usr/hdp/current/oozie-server/oozie-server/webapps'] {'owner': 'oozie', 'cd_access': 'a', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
2016-03-18 19:21:04,578 - Changing group for /usr/hdp/current/oozie-server/oozie-server/webapps from 984 to hadoop
2016-03-18 19:21:04,578 - Directory['/usr/hdp/current/oozie-server/oozie-server/conf'] {'owner': 'oozie', 'cd_access': 'a', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
2016-03-18 19:21:04,578 - Changing owner for /usr/hdp/current/oozie-server/oozie-server/conf from 0 to oozie
2016-03-18 19:21:04,578 - Changing group for /usr/hdp/current/oozie-server/oozie-server/conf from 0 to hadoop
2016-03-18 19:21:04,579 - Directory['/usr/hdp/current/oozie-server/oozie-server'] {'owner': 'oozie', 'recursive': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2016-03-18 19:21:04,579 - Changing group for /usr/hdp/current/oozie-server/oozie-server from 984 to hadoop
2016-03-18 19:21:04,579 - Directory['/usr/hdp/current/oozie-server/libext'] {'recursive': True}
2016-03-18 19:21:04,580 - Execute[('tar', '-xvf', '/usr/hdp/current/oozie-server/oozie-sharelib.tar.gz', '-C', '/usr/hdp/current/oozie-server')] {'not_if': "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1' || test -f /usr/hdp/current/oozie-server/.hashcode && test -d /usr/hdp/current/oozie-server/share && [[ `cat /usr/hdp/current/oozie-server/.hashcode` == '046a880c90fcbbfea52bec80cb88dd8f' ]]", 'sudo': True}
2016-03-18 19:21:08,186 - Execute[('cp', '/usr/share/HDP-oozie/ext-2.2.zip', '/usr/hdp/current/oozie-server/libext')] {'not_if': "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 'sudo': True}
2016-03-18 19:21:08,251 - Execute[('chown', u'oozie:hadoop', '/usr/hdp/current/oozie-server/libext/ext-2.2.zip')] {'not_if': "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 'sudo': True}
2016-03-18 19:21:08,311 - Execute[('chown', '-RL', u'oozie:hadoop', '/usr/hdp/current/oozie-server/oozie-server/conf')] {'not_if': "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'", 'sudo': True}
2016-03-18 19:21:08,370 - File['/var/lib/ambari-agent/tmp/mysql-connector-java.jar'] {'content': DownloadSource('http://namenode.teg:8081/resources//mysql-jdbc-driver.jar')}
2016-03-18 19:21:08,370 - Not downloading the file from http://namenode.teg:8081/resources//mysql-jdbc-driver.jar, because /var/lib/ambari-agent/tmp/mysql-jdbc-driver.jar already exists
2016-03-18 19:21:08,371 - Execute[('cp', '--remove-destination', '/var/lib/ambari-agent/tmp/mysql-connector-java.jar', '/usr/hdp/current/oozie-server/libext/mysql-connector-java.jar')] {'path': ['/bin', '/usr/bin/'], 'sudo': True}
2016-03-18 19:21:08,387 - File['/usr/hdp/current/oozie-server/libext/mysql-connector-java.jar'] {'owner': 'oozie', 'group': 'hadoop'}
2016-03-18 19:21:08,387 - Changing owner for /usr/hdp/current/oozie-server/libext/mysql-connector-java.jar from 0 to oozie
2016-03-18 19:21:08,387 - Changing group for /usr/hdp/current/oozie-server/libext/mysql-connector-java.jar from 0 to hadoop
2016-03-18 19:21:08,387 - Execute['ambari-sudo.sh cp /usr/hdp/current/falcon-client/oozie/ext/falcon-oozie-el-extension-*.jar /usr/hdp/current/oozie-server/libext'] {'not_if': "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'"}
2016-03-18 19:21:08,446 - Execute['ambari-sudo.sh chown oozie:hadoop /usr/hdp/current/oozie-server/libext/falcon-oozie-el-extension-*.jar'] {'not_if': "ambari-sudo.sh su oozie -l -s /bin/bash -c 'ls /var/run/oozie/oozie.pid >/dev/null 2>&1 && ps -p `cat /var/run/oozie/oozie.pid` >/dev/null 2>&1'"}
2016-03-18 19:21:08,505 - Execute['cd /var/tmp/oozie && /usr/hdp/current/oozie-server/bin/oozie-setup.sh prepare-war '] {'not_if': ..., 'user': 'oozie'}
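
To reproduce the failure outside Ambari and confirm the environment fix, the same prepare-war step from the log can be run by hand (a diagnostic sketch; the actual fix is commenting out the system-wide CATALINA_* settings, as in the accepted solution):

    # Should print nothing once the system-wide Tomcat settings are commented out
    sudo su - oozie -c 'env | grep -i catalina'
    # Re-run the step Ambari executes; it should no longer look under /etc/tomcat
    sudo su - oozie -c 'cd /var/tmp/oozie && /usr/hdp/current/oozie-server/bin/oozie-setup.sh prepare-war'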

Namenode: Check Pig

2016-03-18 19:21:10,662 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-03-18 19:21:10,664 - HdfsResource['/user/ambari-qa/pigsmoke.out'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://namenode.teg:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'ambari-qa', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['delete_on_execute']}
2016-03-18 19:21:10,666 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://namenode.teg:50070/webhdfs/v1/user/ambari-qa/pigsmoke.out?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpc5Zu4D 2>/tmp/tmpWPaqCg''] {'logoutput': None, 'quiet': False}
2016-03-18 19:21:10,722 - call returned (0, '')
2016-03-18 19:21:10,723 - HdfsResource['/user/ambari-qa/passwd'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'source': '/etc/passwd', 'default_fs': 'hdfs://namenode.teg:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'ambari-qa', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'file', 'action': ['create_on_execute']}
2016-03-18 19:21:10,724 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://namenode.teg:50070/webhdfs/v1/user/ambari-qa/passwd?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpmxqs6c 2>/tmp/tmp2BKYHp''] {'logoutput': None, 'quiet': False}
2016-03-18 19:21:10,779 - call returned (0, '')
2016-03-18 19:21:10,780 - Creating new file /user/ambari-qa/passwd in DFS
2016-03-18 19:21:10,781 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT -T /etc/passwd '"'"'http://namenode.teg:50070/webhdfs/v1/user/ambari-qa/passwd?op=CREATE&user.name=hdfs&overwrite=True'"'"' 1>/tmp/tmp_7MwDi 2>/tmp/tmpUS6Nii''] {'logoutput': None, 'quiet': False}
2016-03-18 19:21:10,856 - call returned (0, '')
2016-03-18 19:21:10,857 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://namenode.teg:50070/webhdfs/v1/user/ambari-qa/passwd?op=SETOWNER&user.name=hdfs&owner=ambari-qa&group='"'"' 1>/tmp/tmpnl2tO8 2>/tmp/tmphBUwxZ''] {'logoutput': None, 'quiet': False}
2016-03-18 19:21:10,911 - call returned (0, '')
2016-03-18 19:21:10,912 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://namenode.teg:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf'}
2016-03-18 19:21:10,912 - File['/var/lib/ambari-agent/tmp/pigSmoke.sh'] {'content': StaticFile('pigSmoke.sh'), 'mode': 0755}
2016-03-18 19:21:10,914 - Writing File['/var/lib/ambari-agent/tmp/pigSmoke.sh'] because it doesn't exist
2016-03-18 19:21:10,914 - Changing permission for /var/lib/ambari-agent/tmp/pigSmoke.sh from 644 to 755
2016-03-18 19:21:10,915 - Execute['pig /var/lib/ambari-agent/tmp/pigSmoke.sh'] {'logoutput': True, 'path': ['/usr/hdp/current/pig-client/bin:/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin'], 'tries': 3, 'user': 'ambari-qa', 'try_sleep': 5}
WARNING: Use "yarn jar" to launch YARN applications.
16/03/18 19:21:11 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
16/03/18 19:21:11 INFO pig.ExecTypeProvider: Trying ExecType : MAPREDUCE
16/03/18 19:21:11 INFO pig.ExecTypeProvider: Picked MAPREDUCE as the ExecType
2016-03-18 19:21:11,962 [main] INFO  org.apache.pig.Main - Apache Pig version 0.15.0.2.4.0.0-169 (rexported) compiled Feb 10 2016, 07:50:04
2016-03-18 19:21:11,962 [main] INFO  org.apache.pig.Main - Logging error messages to: /home/ambari-qa/pig_1458343271961.log
2016-03-18 19:21:12,475 [main] INFO  org.apache.pig.impl.util.Utils - Default bootup file /home/ambari-qa/.pigbootup not found
2016-03-18 19:21:12,589 [main] INFO  org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://namenode.teg:8020
2016-03-18 19:21:12,978 [main] INFO  org.apache.pig.PigServer - Pig Script ID for the session: PIG-pigSmoke.sh-e15fef52-f48f-4b20-bbaf-4a7f63909107
2016-03-18 19:21:13,487 [main] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://namenode.teg:8188/ws/v1/timeline/
2016-03-18 19:21:13,732 [main] INFO  org.apache.pig.backend.hadoop.ATSService - Created ATS Hook
2016-03-18 19:21:14,527 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
2016-03-18 19:21:14,562 [main] INFO  org.apache.pig.data.SchemaTupleBackend - Key [pig.schematuple] was not set... will not generate code.
2016-03-18 19:21:14,592 [main] INFO  org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
2016-03-18 19:21:14,695 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2016-03-18 19:21:14,725 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2016-03-18 19:21:14,725 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2016-03-18 19:21:14,872 [main] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://namenode.teg:8188/ws/v1/timeline/
2016-03-18 19:21:14,877 [main] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at namenode.teg/172.30.1.135:8050
2016-03-18 19:21:15,030 [main] INFO  org.apache.pig.tools.pigstats.mapreduce.MRScriptState - Pig script settings are added to the job
2016-03-18 19:21:15,036 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2016-03-18 19:21:15,037 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - This job cannot be converted run in-process
2016-03-18 19:21:15,252 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/hdp/2.4.0.0-169/pig/pig-0.15.0.2.4.0.0-169-core-h2.jar to DistributedCache through /tmp/temp-906566658/tmp-1799765657/pig-0.15.0.2.4.0.0-169-core-h2.jar
2016-03-18 19:21:15,670 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/hdp/2.4.0.0-169/pig/lib/automaton-1.11-8.jar to DistributedCache through /tmp/temp-906566658/tmp1151836924/automaton-1.11-8.jar
2016-03-18 19:21:15,683 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/hdp/2.4.0.0-169/pig/lib/antlr-runtime-3.4.jar to DistributedCache through /tmp/temp-906566658/tmp-495130784/antlr-runtime-3.4.jar
2016-03-18 19:21:15,704 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/joda-time-2.9.2.jar to DistributedCache through /tmp/temp-906566658/tmp1872190271/joda-time-2.9.2.jar
2016-03-18 19:21:15,742 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
2016-03-18 19:21:15,750 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
2016-03-18 19:21:15,750 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cacche
2016-03-18 19:21:15,750 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Setting key [pig.schematuple.classes] with classes to deserialize []
2016-03-18 19:21:15,789 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2016-03-18 19:21:15,918 [JobControl] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://namenode.teg:8188/ws/v1/timeline/
2016-03-18 19:21:15,918 [JobControl] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at namenode.teg/172.30.1.135:8050
2016-03-18 19:21:16,158 [JobControl] WARN  org.apache.hadoop.mapreduce.JobResourceUploader - No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
2016-03-18 19:21:16,246 [JobControl] INFO  org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2016-03-18 19:21:16,246 [JobControl] INFO  org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
2016-03-18 19:21:16,268 [JobControl] INFO  org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
2016-03-18 19:21:16,723 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
2016-03-18 19:21:17,248 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_1458343063472_0002
2016-03-18 19:21:17,360 [JobControl] INFO  org.apache.hadoop.mapred.YARNRunner - Job jar is not present. Not adding any jar to the list of resources.
2016-03-18 19:21:18,863 [JobControl] INFO  org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted application application_1458343063472_0002
2016-03-18 19:21:18,935 [JobControl] INFO  org.apache.hadoop.mapreduce.Job - The url to track the job: http://namenode.teg:8088/proxy/application_1458343063472_0002/
2016-03-18 19:21:18,945 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_1458343063472_0002
2016-03-18 19:21:18,945 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases A,B
2016-03-18 19:21:18,945 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: A[16,4],B[17,4] C:  R: 
2016-03-18 19:21:18,969 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2016-03-18 19:21:18,969 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Running jobs are [job_1458343063472_0002]
2016-03-18 19:21:29,035 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 50% complete
2016-03-18 19:21:29,035 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Running jobs are [job_1458343063472_0002]
2016-03-18 19:21:34,113 [main] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://namenode.teg:8188/ws/v1/timeline/
2016-03-18 19:21:34,114 [main] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at namenode.teg/172.30.1.135:8050
2016-03-18 19:21:34,121 [main] INFO  org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2016-03-18 19:21:34,774 [main] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://namenode.teg:8188/ws/v1/timeline/
2016-03-18 19:21:34,774 [main] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at namenode.teg/172.30.1.135:8050
2016-03-18 19:21:34,781 [main] INFO  org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2016-03-18 19:21:34,926 [main] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://namenode.teg:8188/ws/v1/timeline/
2016-03-18 19:21:34,926 [main] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at namenode.teg/172.30.1.135:8050
2016-03-18 19:21:34,933 [main] INFO  org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2016-03-18 19:21:34,999 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2016-03-18 19:21:35,000 [main] INFO  org.apache.pig.tools.pigstats.mapreduce.SimplePigStats - Script Statistics: 

HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.7.1.2.4.0.0-169 0.15.0.2.4.0.0-169 ambari-qa 2016-03-18 19:21:15 2016-03-18 19:21:34 UNKNOWN

Success!

Job Stats (time in seconds):
JobId Maps Reduces MaxMapTime MinMapTime AvgMapTime MedianMapTime MaxReduceTime MinReduceTime AvgReduceTime MedianReducetime Alias Feature Outputs
job_1458343063472_0002 1 0 2 2 2 2 0 0 0 0 A,B MAP_ONLY hdfs://namenode.teg:8020/user/ambari-qa/pigsmoke.out,

Input(s):
Successfully read 49 records (2652 bytes) from: "hdfs://namenode.teg:8020/user/ambari-qa/passwd"

Output(s):
Successfully stored 49 records (328 bytes) in: "hdfs://namenode.teg:8020/user/ambari-qa/pigsmoke.out"

Counters:
Total records written : 49
Total bytes written : 328
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0

Job DAG:
job_1458343063472_0002


2016-03-18 19:21:35,074 [main] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://namenode.teg:8188/ws/v1/timeline/
2016-03-18 19:21:35,074 [main] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at namenode.teg/172.30.1.135:8050
2016-03-18 19:21:35,081 [main] INFO  org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2016-03-18 19:21:35,179 [main] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://namenode.teg:8188/ws/v1/timeline/
2016-03-18 19:21:35,179 [main] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at namenode.teg/172.30.1.135:8050
2016-03-18 19:21:35,183 [main] INFO  org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2016-03-18 19:21:35,262 [main] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://namenode.teg:8188/ws/v1/timeline/
2016-03-18 19:21:35,262 [main] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at namenode.teg/172.30.1.135:8050
2016-03-18 19:21:35,269 [main] INFO  org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2016-03-18 19:21:35,308 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Success!
2016-03-18 19:21:35,364 [main] INFO  org.apache.pig.Main - Pig script completed in 23 seconds and 593 milliseconds (23593 ms)
2016-03-18 19:21:38,735 - ExecuteHadoop['fs -test -e /user/ambari-qa/pigsmoke.out'] {'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'user': 'ambari-qa', 'conf_dir': '/usr/hdp/current/hadoop-client/conf'}
2016-03-18 19:21:38,738 - Execute['hadoop --config /usr/hdp/current/hadoop-client/conf fs -test -e /user/ambari-qa/pigsmoke.out'] {'logoutput': None, 'try_sleep': 0, 'environment': {}, 'tries': 1, 'user': 'ambari-qa', 'path': ['/usr/hdp/current/hadoop-client/bin']}
2016-03-18 19:21:40,773 - HdfsResource['/user/ambari-qa/pigsmoke.out'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://namenode.teg:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'ambari-qa', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['delete_on_execute']}
2016-03-18 19:21:40,773 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://namenode.teg:50070/webhdfs/v1/user/ambari-qa/pigsmoke.out?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpjJVSLC 2>/tmp/tmpm0IiMb''] {'logoutput': None, 'quiet': False}
2016-03-18 19:21:40,834 - call returned (0, '')
2016-03-18 19:21:40,835 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X DELETE '"'"'http://namenode.teg:50070/webhdfs/v1/user/ambari-qa/pigsmoke.out?op=DELETE&user.name=hdfs&recursive=True'"'"' 1>/tmp/tmpMD3uAW 2>/tmp/tmpqPl2oT''] {'logoutput': None, 'quiet': False}
2016-03-18 19:21:40,890 - call returned (0, '')
2016-03-18 19:21:40,891 - HdfsResource['/user/ambari-qa/passwd'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'source': '/etc/passwd', 'default_fs': 'hdfs://namenode.teg:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'ambari-qa', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'file', 'action': ['create_on_execute']}
2016-03-18 19:21:40,892 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://namenode.teg:50070/webhdfs/v1/user/ambari-qa/passwd?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp53RqKf 2>/tmp/tmp5xCvww''] {'logoutput': None, 'quiet': False}
2016-03-18 19:21:40,946 - call returned (0, '')
2016-03-18 19:21:40,947 - DFS file /user/ambari-qa/passwd is identical to /etc/passwd, skipping the copying
2016-03-18 19:21:40,947 - Called copy_to_hdfs tarball: tez
2016-03-18 19:21:40,947 - Default version is 2.4.0.0-169
2016-03-18 19:21:40,947 - Source file: /usr/hdp/2.4.0.0-169/tez/lib/tez.tar.gz , Dest file in HDFS: /hdp/apps/2.4.0.0-169/tez/tez.tar.gz
2016-03-18 19:21:40,948 - HdfsResource['/hdp/apps/2.4.0.0-169/tez'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://namenode.teg:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'mode': 0555}
2016-03-18 19:21:40,949 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://namenode.teg:50070/webhdfs/v1/hdp/apps/2.4.0.0-169/tez?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpkLStT3 2>/tmp/tmpMHad1P''] {'logoutput': None, 'quiet': False}
2016-03-18 19:21:41,006 - call returned (0, '')
2016-03-18 19:21:41,007 - HdfsResource['/hdp/apps/2.4.0.0-169/tez/tez.tar.gz'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'source': '/usr/hdp/2.4.0.0-169/tez/lib/tez.tar.gz', 'default_fs': 'hdfs://namenode.teg:8020', 'replace_existing_files': False, 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'hdfs', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'file', 'action': ['create_on_execute'], 'mode': 0444}
2016-03-18 19:21:41,008 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://namenode.teg:50070/webhdfs/v1/hdp/apps/2.4.0.0-169/tez/tez.tar.gz?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpuVcQ3j 2>/tmp/tmpF4k0c8''] {'logoutput': None, 'quiet': False}
2016-03-18 19:21:41,062 - call returned (0, '')
2016-03-18 19:21:41,063 - DFS file /hdp/apps/2.4.0.0-169/tez/tez.tar.gz is identical to /usr/hdp/2.4.0.0-169/tez/lib/tez.tar.gz, skipping the copying
2016-03-18 19:21:41,063 - Will attempt to copy tez tarball from /usr/hdp/2.4.0.0-169/tez/lib/tez.tar.gz to DFS at /hdp/apps/2.4.0.0-169/tez/tez.tar.gz.
2016-03-18 19:21:41,063 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://namenode.teg:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf'}
2016-03-18 19:21:41,064 - Execute['pig -x tez /var/lib/ambari-agent/tmp/pigSmoke.sh'] {'logoutput': True, 'path': ['/usr/hdp/current/pig-client/bin:/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin'], 'tries': 3, 'user': 'ambari-qa', 'try_sleep': 5}
WARNING: Use "yarn jar" to launch YARN applications.
16/03/18 19:21:42 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
16/03/18 19:21:42 INFO pig.ExecTypeProvider: Trying ExecType : MAPREDUCE
16/03/18 19:21:42 INFO pig.ExecTypeProvider: Trying ExecType : TEZ_LOCAL
16/03/18 19:21:42 INFO pig.ExecTypeProvider: Trying ExecType : TEZ
16/03/18 19:21:42 INFO pig.ExecTypeProvider: Picked TEZ as the ExecType
2016-03-18 19:21:42,114 [main] INFO  org.apache.pig.Main - Apache Pig version 0.15.0.2.4.0.0-169 (rexported) compiled Feb 10 2016, 07:50:04
2016-03-18 19:21:42,114 [main] INFO  org.apache.pig.Main - Logging error messages to: /home/ambari-qa/pig_1458343302112.log
2016-03-18 19:21:42,626 [main] INFO  org.apache.pig.impl.util.Utils - Default bootup file /home/ambari-qa/.pigbootup not found
2016-03-18 19:21:42,755 [main] INFO  org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://namenode.teg:8020
2016-03-18 19:21:43,284 [main] INFO  org.apache.pig.PigServer - Pig Script ID for the session: PIG-pigSmoke.sh-b178ae5a-b08f-44a7-b9d7-3732c4c6ae35
2016-03-18 19:21:43,789 [main] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://namenode.teg:8188/ws/v1/timeline/
2016-03-18 19:21:44,019 [main] INFO  org.apache.pig.backend.hadoop.ATSService - Created ATS Hook
2016-03-18 19:21:44,804 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
2016-03-18 19:21:44,838 [main] INFO  org.apache.pig.data.SchemaTupleBackend - Key [pig.schematuple] was not set... will not generate code.
2016-03-18 19:21:44,869 [main] INFO  org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
2016-03-18 19:21:45,008 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.TezLauncher - Tez staging directory is /tmp/ambari-qa/staging and resources directory is /tmp/temp717009601
2016-03-18 19:21:45,057 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.plan.TezCompiler - File concatenation threshold: 100 optimistic? false
2016-03-18 19:21:45,249 [main] INFO  org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2016-03-18 19:21:45,250 [main] INFO  org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
2016-03-18 19:21:45,314 [main] INFO  org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
2016-03-18 19:21:45,700 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.TezJobCompiler - Local resource: pig-0.15.0.2.4.0.0-169-core-h2.jar
2016-03-18 19:21:45,700 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.TezJobCompiler - Local resource: antlr-runtime-3.4.jar
2016-03-18 19:21:45,700 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.TezJobCompiler - Local resource: automaton-1.11-8.jar
2016-03-18 19:21:45,700 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.TezJobCompiler - Local resource: joda-time-2.9.2.jar
2016-03-18 19:21:45,819 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.util.MRToTezHelper - Setting tez.runtime.io.sort.mb to 859 from MR setting mapreduce.task.io.sort.mb
2016-03-18 19:21:45,819 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.util.MRToTezHelper - Setting tez.runtime.shuffle.read.timeout to 180000 from MR setting mapreduce.reduce.shuffle.read.timeout
2016-03-18 19:21:45,819 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.util.MRToTezHelper - Setting tez.runtime.ifile.readahead.bytes to 4194304 from MR setting mapreduce.ifile.readahead.bytes
2016-03-18 19:21:45,819 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.util.MRToTezHelper - Setting tez.runtime.shuffle.ssl.enable to false from MR setting mapreduce.shuffle.ssl.enabled
2016-03-18 19:21:45,819 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.util.MRToTezHelper - Setting tez.runtime.sort.spill.percent to 0.7 from MR setting mapreduce.map.sort.spill.percent
2016-03-18 19:21:45,819 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.util.MRToTezHelper - Setting tez.runtime.ifile.readahead to true from MR setting mapreduce.ifile.readahead
2016-03-18 19:21:45,819 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.util.MRToTezHelper - Setting tez.runtime.shuffle.merge.percent to 0.66 from MR setting mapreduce.reduce.shuffle.merge.percent
2016-03-18 19:21:45,819 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.util.MRToTezHelper - Setting tez.runtime.shuffle.parallel.copies to 30 from MR setting mapreduce.reduce.shuffle.parallelcopies
2016-03-18 19:21:45,819 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.util.MRToTezHelper - Setting tez.runtime.shuffle.memory.limit.percent to 0.25 from MR setting mapreduce.reduce.shuffle.memory.limit.percent
2016-03-18 19:21:45,819 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.util.MRToTezHelper - Setting tez.runtime.io.sort.factor to 100 from MR setting mapreduce.task.io.sort.factor
2016-03-18 19:21:45,819 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.util.MRToTezHelper - Setting tez.runtime.compress to false from MR setting mapreduce.map.output.compress
2016-03-18 19:21:45,819 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.util.MRToTezHelper - Setting tez.runtime.shuffle.connect.timeout to 180000 from MR setting mapreduce.reduce.shuffle.connect.timeout
2016-03-18 19:21:45,819 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.util.MRToTezHelper - Setting tez.runtime.task.input.post-merge.buffer.percent to 0.0 from MR setting mapreduce.reduce.input.buffer.percent
2016-03-18 19:21:45,820 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.util.MRToTezHelper - Setting tez.runtime.compress.codec to org.apache.hadoop.io.compress.DefaultCodec from MR setting mapreduce.map.output.compress.codec
2016-03-18 19:21:45,820 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.util.MRToTezHelper - Setting tez.runtime.merge.progress.records to 10000 from MR setting mapreduce.task.merge.progress.records
2016-03-18 19:21:45,820 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.util.MRToTezHelper - Setting tez.runtime.internal.sorter.class to org.apache.hadoop.util.QuickSort from MR setting map.sort.class
2016-03-18 19:21:45,820 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.util.MRToTezHelper - Setting tez.runtime.shuffle.fetch.buffer.percent to 0.7 from MR setting mapreduce.reduce.shuffle.input.buffer.percent
2016-03-18 19:21:45,893 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.TezJobCompiler - For vertex - scope-5: parallelism=1, memory=1536, java opts=-XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps -XX:+UseNUMA -XX:+UseG1GC -XX:+ResizeTLAB
2016-03-18 19:21:46,046 [PigTezLauncher-0] INFO  org.apache.pig.tools.pigstats.tez.TezScriptState - Pig script settings are added to the job
2016-03-18 19:21:46,163 [PigTezLauncher-0] INFO  org.apache.tez.client.TezClient - Tez Client Version: [ component=tez-api, version=0.7.0.2.4.0.0-169, revision=3c1431f45faaca982ecc8dad13a107787b834696, SCM-URL=scm:git:https://git-wip-us.apache.org/repos/asf/tez.git, buildTime=20160210-0711 ]
2016-03-18 19:21:46,320 [PigTezLauncher-0] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://namenode.teg:8188/ws/v1/timeline/
2016-03-18 19:21:46,403 [PigTezLauncher-0] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at namenode.teg/172.30.1.135:8050
2016-03-18 19:21:46,559 [PigTezLauncher-0] INFO  org.apache.tez.client.TezClient - Using org.apache.tez.dag.history.ats.acls.ATSV15HistoryACLPolicyManager to manage Timeline ACLs
2016-03-18 19:21:46,660 [PigTezLauncher-0] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://namenode.teg:8188/ws/v1/timeline/
2016-03-18 19:21:46,663 [PigTezLauncher-0] INFO  org.apache.tez.client.TezClient - Session mode. Starting session.
2016-03-18 19:21:46,667 [PigTezLauncher-0] INFO  org.apache.tez.client.TezClientUtils - Using tez.lib.uris value from configuration: /hdp/apps/2.4.0.0-169/tez/tez.tar.gz
2016-03-18 19:21:46,709 [PigTezLauncher-0] INFO  org.apache.tez.client.TezClient - Stage directory /tmp/ambari-qa/staging doesn't exist and is created
2016-03-18 19:21:46,727 [PigTezLauncher-0] INFO  org.apache.tez.client.TezClient - Tez system stage directory hdfs://namenode.teg:8020/tmp/ambari-qa/staging/.tez/application_1458343063472_0003 doesn't exist and is created
2016-03-18 19:21:46,745 [PigTezLauncher-0] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Writing domains for appattempt_1458343063472_0003_000001 to /ats/active/application_1458343063472_0003/appattempt_1458343063472_0003_000001/domainlog-appattempt_1458343063472_0003_000001
2016-03-18 19:21:46,757 [PigTezLauncher-0] INFO  org.apache.tez.dag.history.ats.acls.ATSV15HistoryACLPolicyManager - Created Timeline Domain for History ACLs, domainId=Tez_ATS_application_1458343063472_0003
2016-03-18 19:21:47,479 [PigTezLauncher-0] INFO  org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted application application_1458343063472_0003
2016-03-18 19:21:47,483 [PigTezLauncher-0] INFO  org.apache.tez.client.TezClient - The url to track the Tez Session: http://namenode.teg:8088/proxy/application_1458343063472_0003/
2016-03-18 19:21:52,143 [PigTezLauncher-0] INFO  org.apache.pig.backend.hadoop.executionengine.tez.TezJob - Submitting DAG PigLatin:pigSmoke.sh-0_scope-0
2016-03-18 19:21:52,143 [PigTezLauncher-0] INFO  org.apache.tez.client.TezClient - Submitting dag to TezSession, sessionName=PigLatin:pigSmoke.sh, applicationId=application_1458343063472_0003, dagName=PigLatin:pigSmoke.sh-0_scope-0, callerContext={ context=PIG, callerType=PIG_SCRIPT_ID, callerId=PIG-pigSmoke.sh-b178ae5a-b08f-44a7-b9d7-3732c4c6ae35 }
2016-03-18 19:21:52,364 [PigTezLauncher-0] INFO  org.apache.tez.client.TezClient - Submitted dag to TezSession, sessionName=PigLatin:pigSmoke.sh, applicationId=application_1458343063472_0003, dagName=PigLatin:pigSmoke.sh-0_scope-0
2016-03-18 19:21:52,483 [PigTezLauncher-0] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://namenode.teg:8188/ws/v1/timeline/
2016-03-18 19:21:52,483 [PigTezLauncher-0] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at namenode.teg/172.30.1.135:8050
2016-03-18 19:21:52,486 [PigTezLauncher-0] INFO  org.apache.pig.backend.hadoop.executionengine.tez.TezJob - Submitted DAG PigLatin:pigSmoke.sh-0_scope-0. Application id: application_1458343063472_0003
2016-03-18 19:21:53,007 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.TezLauncher - HadoopJobId: job_1458343063472_0003
2016-03-18 19:21:53,487 [Timer-0] INFO  org.apache.pig.backend.hadoop.executionengine.tez.TezJob - DAG Status: status=RUNNING, progress=TotalTasks: 1 Succeeded: 0 Running: 0 Failed: 0 Killed: 0, diagnostics=, counters=null
2016-03-18 19:21:56,653 [PigTezLauncher-0] INFO  org.apache.tez.common.counters.Limits - Counter limits initialized with parameters:  GROUP_NAME_MAX=256, MAX_GROUPS=3000, COUNTER_NAME_MAX=64, MAX_COUNTERS=10000
2016-03-18 19:21:56,656 [PigTezLauncher-0] INFO  org.apache.pig.backend.hadoop.executionengine.tez.TezJob - DAG Status: status=SUCCEEDED, progress=TotalTasks: 1 Succeeded: 1 Running: 0 Failed: 0 Killed: 0, diagnostics=, counters=Counters: 19
 org.apache.tez.common.counters.DAGCounter
  NUM_SUCCEEDED_TASKS=1
  TOTAL_LAUNCHED_TASKS=1
  DATA_LOCAL_TASKS=1
  AM_CPU_MILLISECONDS=1570
  AM_GC_TIME_MILLIS=13
 File System Counters
  HDFS_BYTES_READ=2279
  HDFS_BYTES_WRITTEN=328
  HDFS_READ_OPS=4
  HDFS_LARGE_READ_OPS=0
  HDFS_WRITE_OPS=2
 org.apache.tez.common.counters.TaskCounter
  GC_TIME_MILLIS=33
  CPU_MILLISECONDS=2990
  PHYSICAL_MEMORY_BYTES=257949696
  VIRTUAL_MEMORY_BYTES=3608465408
  COMMITTED_HEAP_BYTES=257949696
  INPUT_RECORDS_PROCESSED=49
  OUTPUT_RECORDS=49
 TaskCounter_scope_5_INPUT_scope_0
  INPUT_RECORDS_PROCESSED=49
 TaskCounter_scope_5_OUTPUT_scope_4
  OUTPUT_RECORDS=49
2016-03-18 19:21:57,013 [main] INFO  org.apache.pig.tools.pigstats.tez.TezPigScriptStats - Script Statistics:

       HadoopVersion: 2.7.1.2.4.0.0-169                                                                                   
          PigVersion: 0.15.0.2.4.0.0-169                                                                                  
          TezVersion: 0.7.0.2.4.0.0-169                                                                                   
              UserId: ambari-qa                                                                                           
            FileName: /var/lib/ambari-agent/tmp/pigSmoke.sh                                                               
           StartedAt: 2016-03-18 19:21:45                                                                                 
          FinishedAt: 2016-03-18 19:21:57                                                                                 
            Features: UNKNOWN                                                                                             

Success!

DAG PigLatin:pigSmoke.sh-0_scope-0:
       ApplicationId: job_1458343063472_0003                                                                              
  TotalLaunchedTasks: 1                                                                                                   
       FileBytesRead: 0                                                                                                   
    FileBytesWritten: 0                                                                                                   
       HdfsBytesRead: 2279                                                                                                
    HdfsBytesWritten: 328                                                                                                 

Input(s):
Successfully read 49 records (2279 bytes) from: "hdfs://namenode.teg:8020/user/ambari-qa/passwd"

Output(s):
Successfully stored 49 records (328 bytes) in: "hdfs://namenode.teg:8020/user/ambari-qa/pigsmoke.out"

2016-03-18 19:21:57,042 [main] INFO  org.apache.pig.Main - Pig script completed in 15 seconds and 168 milliseconds (15168 ms)
2016-03-18 19:21:57,042 [main] INFO  org.apache.pig.backend.hadoop.executionengine.tez.TezLauncher - Shutting down thread pool
2016-03-18 19:21:57,059 [Thread-18] INFO  org.apache.pig.backend.hadoop.executionengine.tez.TezSessionManager - Shutting down Tez session org.apache.tez.client.TezClient@697ae4f2
2016-03-18 19:21:57,069 [Thread-18] INFO  org.apache.tez.client.TezClient - Shutting down Tez Session, sessionName=PigLatin:pigSmoke.sh, applicationId=application_1458343063472_0003
2016-03-18 19:22:00,377 - ExecuteHadoop['fs -test -e /user/ambari-qa/pigsmoke.out'] {'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'user': 'ambari-qa', 'conf_dir': '/usr/hdp/current/hadoop-client/conf'}
2016-03-18 19:22:00,378 - Execute['hadoop --config /usr/hdp/current/hadoop-client/conf fs -test -e /user/ambari-qa/pigsmoke.out'] {'logoutput': None, 'try_sleep': 0, 'environment': {}, 'tries': 1, 'user': 'ambari-qa', 'path': ['/usr/hdp/current/hadoop-client/bin']}
Datanode: warnings from the Flume start and Flume check steps; no logs were posted for either.

avatar
Expert Contributor

The status as of now is:

Hive - WebHCat server

Oozie - as per the error above; an issue with SSL and HTTPS

Falcon - Falcon server

Flume - no error or warning, but it is on the datanode

I look forward to comments/recommendations, please.

[screenshot attached: 2899-capture.png]

avatar
Expert Contributor

This is resolved.

Possible Cause

The main problem was Oozie failing to find "/etc/tomcat/conf/ssl/server.xml". The Oozie server ships with its own embedded app server, so it should not reference, or conflict with, the Tomcat app server we had deployed for our own purposes.

setting CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-server/oozie-server}
setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
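
As a quick sanity check (a sketch, assuming the default HDP 2.4 layout implied by the CATALINA_BASE value above), confirm that Oozie's bundled Catalina base contains its own configuration, so it has no reason to fall back to /etc/tomcat:

    # Oozie's embedded Tomcat keeps its config under its own CATALINA_BASE;
    # if server.xml is here, Oozie should never need /etc/tomcat/conf/ssl/server.xml
    ls -l /usr/hdp/current/oozie-server/oozie-server/conf/

    # Show which CATALINA_* values a login shell would hand to oozie-setup.sh
    env | grep -i catalina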

Despite these settings, it still referred to /etc/tomcat. We had CATALINA_BASE and CATALINA_HOME configured in .bashrc, /etc/profile and /etc/init.d/tomcat.

oozie-setup.sh references CATALINA_BASE in many places, which is likely why it picked up the wrong path.
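
To find every place a system-wide CATALINA_BASE or CATALINA_HOME could leak into Oozie's environment, a grep along these lines helps (the paths are the usual suspects from our setup, not an exhaustive list):

    # Locate global Tomcat settings that oozie-setup.sh might inherit
    grep -rnE 'CATALINA_(BASE|HOME)' \
        /etc/profile /etc/profile.d/ /etc/init.d/tomcat ~/.bashrc 2>/dev/null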

Solution:

We walked through the shell scripts of Oozie and the other services that did not start.

We then commented out the CATALINA_HOME and CATALINA_BASE references in /etc/profile and /etc/init.d/tomcat.
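
For illustration only, the entries we commented out in /etc/profile looked roughly like this (/usr/share/tomcat is a placeholder for wherever the standalone Tomcat actually lives):

    # export CATALINA_HOME=/usr/share/tomcat
    # export CATALINA_BASE=/usr/share/tomcat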

Impact:

All Hadoop services have started.

Caution

Running a standalone Tomcat app server on the same host as Hadoop can create conflicts if the Tomcat configuration is set globally in /etc/profile or /etc/init.d/tomcat.

Either run the app server on a separate host from Oozie, or scope the Tomcat environment variables to a specific user via .bashrc, as sketched below.
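
A minimal sketch of the user-scoped alternative, again assuming /usr/share/tomcat as the standalone install path: export the variables only in the dedicated Tomcat user's ~/.bashrc, so the oozie service user's shells never see them.

    # ~/.bashrc of the tomcat user only -- never /etc/profile or /etc/init.d/tomcat
    export CATALINA_HOME=/usr/share/tomcat
    export CATALINA_BASE=/usr/share/tomcat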