
Hive2 action fails without any evident error in logs

Hello everyone,

I have an Oozie workflow, generated programmatically, that performs several Sqoop and Hive2 actions on a kerberized cluster. Sometimes, for no evident reason, an action that has always succeeded fails. I've checked the YARN logs, but they don't show anything specific, just an interception of System.exit(2); the full container logs are pasted below.

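For context, here is roughly how each hive2 action is generated (a simplified sketch, not my real generator: the helper name `hive2_action` is made up, and the real code also emits job-tracker/name-node, credentials, and hivevar parameters; only the element layout, which follows the Oozie hive2 action schema, is shown):

```python
# Simplified sketch of how each hive2 action element is built.
# The helper name is hypothetical; the element layout follows the
# Oozie hive2 action schema (uri:oozie:hive2-action:0.1).
import xml.etree.ElementTree as ET

def hive2_action(name, jdbc_url, script, ok_to, error_to):
    """Build an <action> node wrapping a hive2 action."""
    action = ET.Element("action", name=name)
    hive2 = ET.SubElement(action, "hive2", xmlns="uri:oozie:hive2-action:0.1")
    ET.SubElement(hive2, "jdbc-url").text = jdbc_url
    ET.SubElement(hive2, "script").text = script
    ET.SubElement(action, "ok", to=ok_to)
    ET.SubElement(action, "error", to=error_to)
    return action

node = hive2_action(
    "DLT01V_VPNRELCTKT_CONSOLIDATE_TABLE_ACTION",
    "jdbc:hive2://trmas-fc2d552a.azcloud.local:10000/default;ssl=true",
    "DLT01V_VPNRELCTKT_CONSOLIDATE_TABLE.hql",
    ok_to="end", error_to="kill")
print(ET.tostring(node, encoding="unicode"))
```

The generated actions have always validated and run fine, so I don't think the XML itself is the problem; I include it only to show the shape of what the workflow executes.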
Container: container_e11_1523145714907_1686_01_000002 on trwor-7d01cf08.azcloud.local_8041
============================================================================================
LogType:stderr
Log Upload Time:Tue Apr 10 21:20:11 +0000 2018
LogLength:1156
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.13.2-1.cdh5.13.2.p0.3/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data4/yarn/filecache/555/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Connecting to jdbc:hive2://trmas-fc2d552a.azcloud.local:10000/default;ssl=true
Connected to: Apache Hive (version 1.1.0-cdh5.13.2)
Driver: Hive JDBC (version 1.1.0-cdh5.13.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
No rows affected (0.085 seconds)
No rows affected (0.047 seconds)
No rows affected (0.17 seconds)
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=1)
Closing: 0: jdbc:hive2://trmas-fc2d552a.azcloud.local:10000/default;ssl=true
Intercepting System.exit(2)
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.Hive2Main], exit code [2]

LogType:stdout
Log Upload Time:Tue Apr 10 21:20:11 +0000 2018
LogLength:120824
Log Contents:

Oozie Launcher starts

Heart beat
{"properties":[{"key":"oozie.launcher.job.id","value":"job_1523145714907_1686","isFinal":false,"resource":"programatically"},{"key":"oozie.job.id","value":"0000622-180408000321809-oozie-oozi-W","isFinal":false,"resource":"programatically"},{"key":"oozie.action.id","value":"0000622-180408000321809-oozie-oozi-W@DLT01V_VPNRELCTKT_CONSOLIDATE_TABLE_ACTION","isFinal":false,"resource":"programatically"},{"key":"mapreduce.job.tags","value":"oozie-132d4c981da77a42fcf1f549b2126e25","isFinal":false,"resource":"programatically"}]}
Starting the execution of prepare actions
Completed the execution of prepare actions successfully

[...]

Main class        : org.apache.oozie.action.hadoop.Hive2Main

Maximum output    : 2048

Arguments         :

Java System Properties:
------------------------

------------------------

=================================================================

>>> Invoking Main class now >>>

INFO: loading log4j config file log4j.properties.
INFO: log4j config file log4j.properties loaded successfully.

Oozie Hive 2 action configuration
=================================================================

Using action configuration file /data1/yarn/usercache/icon0104/appcache/application_1523145714907_1686/container_e11_1523145714907_1686_01_000002/action.xml
------------------------
Setting env property for mapreduce.job.credentials.binary to: /data1/yarn/usercache/icon0104/appcache/application_1523145714907_1686/container_e11_1523145714907_1686_01_000002/container_tokens
------------------------
Current (local) dir = /data1/yarn/usercache/icon0104/appcache/application_1523145714907_1686/container_e11_1523145714907_1686_01_000002

[...]

Script [DLT01V_VPNRELCTKT_CONSOLIDATE_TABLE.hql] content:
------------------------
SET hive.exec.compress.output=true;
SET avro.output.codec=snappy;
DROP TABLE IF EXISTS ${swamp_db}.VPNRELCTKT;
CREATE TABLE ${swamp_db}.VPNRELCTKT
STORED AS AVRO
LOCATION '${nameNode}/user/${azure_username}/data/swamp/teradata/VPNRELCTKT'
TBLPROPERTIES("avro.output.codec"="snappy")
AS SELECT
TRIM(CODARLPFX) AS CODARLPFX,
TRIM(DATPNRCRE) AS DATPNRCRE,
TRIM(NAMFSTNAMPAX) AS NAMFSTNAMPAX,
CODFAXIDX,
CODSEGID0,
TRIM(CODFLTNUM) AS CODFLTNUM,
TRIM(CODOFFPNT) AS CODOFFPNT,
CODPAXID0,
TRIM(CODBRDPNT) AS CODBRDPNT,
TRIM(CODARLITA) AS CODARLITA,
TRIM(NAMSURPAX) AS NAMSURPAX,
TRIM(CODTKTNUM) AS CODTKTNUM,
TRIM(TMSLSTMOD) AS TMSLSTMOD,
CODTRVTRA,
TRIM(CODCPNNUM) AS CODCPNNUM,
TRIM(CODPNRREF) AS CODPNRREF,
TRIM(CODCLADSG) AS CODCLADSG,
TRIM(DATDEP) AS DATDEP
FROM ${swamp_staging_db}.TMP_VPNRELCTKT;
DROP TABLE IF EXISTS ${swamp_staging_db}.TMP_VPNRELCTKT;
------------------------
Parameters:
------------------------
swamp_db=swamp_db
swamp_staging_db=swamp_staging_db
nameNode=hdfs://trmas-c9471d78.azcloud.local:8020
azure_username=icon0104
------------------------
Beeline command arguments :
-u jdbc:hive2://trmas-fc2d552a.azcloud.local:10000/default;ssl=true
-n icon0104
-p DUMMY
-d org.apache.hive.jdbc.HiveDriver
-f DLT01V_VPNRELCTKT_CONSOLIDATE_TABLE.hql
--hivevar swamp_db=swamp_db
--hivevar swamp_staging_db=swamp_staging_db
--hivevar nameNode=hdfs://trmas-c9471d78.azcloud.local:8020
--hivevar azure_username=icon0104
-a delegationToken
--hiveconf mapreduce.job.tags=oozie-132d4c981da77a42fcf1f549b2126e25
--hiveconf oozie.action.id=0000622-180408000321809-oozie-oozi-W@DLT01V_VPNRELCTKT_CONSOLIDATE_TABLE_ACTION
--hiveconf oozie.HadoopAccessorService.created=true
--hiveconf oozie.job.id=0000622-180408000321809-oozie-oozi-W
--hiveconf oozie.action.rootlogger.log.level=INFO
--hiveconf oozie.child.mapreduce.job.tags=oozie-132d4c981da77a42fcf1f549b2126e25

Fetching child yarn jobs tag id :
oozie-132d4c981da77a42fcf1f549b2126e25
Child yarn jobs are found -
=================================================================

>>> Invoking Beeline command line now >>>

0: jdbc:hive2://trmas-fc2d552a.azcloud.local:> SET hive.exec.compress.output=true;
0: jdbc:hive2://trmas-fc2d552a.azcloud.local:> SET avro.output.codec=snappy;
0: jdbc:hive2://trmas-fc2d552a.azcloud.local:>
0: jdbc:hive2://trmas-fc2d552a.azcloud.local:> DROP TABLE IF EXISTS ${swamp_db}.VPNRELCTKT;
0: jdbc:hive2://trmas-fc2d552a.azcloud.local:>
0: jdbc:hive2://trmas-fc2d552a.azcloud.local:> CREATE TABLE ${swamp_db}.VPNRELCTKT
. . .> STORED AS AVRO
. . .> LOCATION '${nameNode}/user/${azure_username}/data/swamp/teradata/VPNRELCTKT'
. . .> TBLPROPERTIES("avro.output.codec"="snappy")
. . .> AS SELECT
. . .> TRIM(CODARLPFX) AS CODARLPFX,
. . .> TRIM(DATPNRCRE) AS DATPNRCRE,
. . .> TRIM(NAMFSTNAMPAX) AS NAMFSTNAMPAX,
. . .> CODFAXIDX,
. . .> CODSEGID0,
. . .> TRIM(CODFLTNUM) AS CODFLTNUM,
. . .> TRIM(CODOFFPNT) AS CODOFFPNT,
. . .> CODPAXID0,
. . .> TRIM(CODBRDPNT) AS CODBRDPNT,
. . .> TRIM(CODARLITA) AS CODARLITA,
. . .> TRIM(NAMSURPAX) AS NAMSURPAX,
. . .> TRIM(CODTKTNUM) AS CODTKTNUM,
. . .> TRIM(TMSLSTMOD) AS TMSLSTMOD,
. . .> CODTRVTRA,
. . .> TRIM(CODCPNNUM) AS CODCPNNUM,
. . .> TRIM(CODPNRREF) AS CODPNRREF,
. . .> TRIM(CODCLADSG) AS CODCLADSG,
. . .> TRIM(DATDEP) AS DATDEP
. . .> FROM ${swamp_staging_db}.TMP_VPNRELCTKT;

<<< Invocation of Beeline command completed <<<

No child hadoop job is executed.
Intercepting System.exit(2)

<<< Invocation of Main class completed <<<

Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.Hive2Main], exit code [2]

Oozie Launcher failed, finishing Hadoop job gracefully
Oozie Launcher, uploading action data to HDFS sequence file: hdfs://trmas-c9471d78.azcloud.local:8020/user/icon0104/oozie-oozi/0000622-180408000321809-oozie-oozi-W/DLT01V_VPNRELCTKT_CONSOLIDATE_TABLE_ACTION--hive2/action-data.seq
Successfully reset security manager from org.apache.oozie.action.hadoop.LauncherSecurityManager@7f27f54a to null
Oozie Launcher ends

LogType:syslog
Log Upload Time:Tue Apr 10 21:20:11 +0000 2018
LogLength:5106
Log Contents:
2018-04-10 21:19:58,276 WARN [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Metrics system not started: org.apache.commons.configuration.ConfigurationException: Unable to load the configuration from the URL file:/run/cloudera-scm-agent/process/825-yarn-NODEMANAGER/hadoop-metrics2.properties
2018-04-10 21:19:58,330 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens:
2018-04-10 21:19:58,330 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: kms-dt, Service: 10.60.4.88:16000, Ident: (kms-dt owner=icon0104, renewer=yarn, realUser=oozie, issueDate=1523395180973, maxDate=1523999980973, sequenceNumber=10523, masterKeyId=51)
2018-04-10 21:19:58,582 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: HIVE_DELEGATION_TOKEN, Service: hiveserver2ClientToken, Ident: 00 08 69 63 6f 6e 30 31 30 34 04 68 69 76 65 2f 68 69 76 65 2f 74 72 6d 61 73 2d 66 63 32 64 35 35 32 61 2e 61 7a 63 6c 6f 75 64 2e 6c 6f 63 61 6c 40 41 5a 43 4c 4f 55 44 2e 4c 4f 43 41 4c 8a 01 62 b1 6d b7 df 8a 01 62 d5 7a 3b df 8e 02 81 03
2018-04-10 21:19:58,583 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: mapreduce.job, Service: job_1523145714907_1686, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@4b247c95)
2018-04-10 21:19:58,583 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: HDFS_DELEGATION_TOKEN, Service: 10.60.4.82:8020, Ident: (token for icon0104: HDFS_DELEGATION_TOKEN owner=icon0104, renewer=yarn, realUser=oozie/trmas-fc2d552a.azcloud.local@AZCLOUD.LOCAL, issueDate=1523395180887, maxDate=1523999980887, sequenceNumber=10692, masterKeyId=36)
2018-04-10 21:19:58,583 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:hanameservice, Ident: (token for icon0104: HDFS_DELEGATION_TOKEN owner=icon0104, renewer=yarn, realUser=oozie/trmas-fc2d552a.azcloud.local@AZCLOUD.LOCAL, issueDate=1523395181246, maxDate=1523999981246, sequenceNumber=10693, masterKeyId=36)
2018-04-10 21:19:58,584 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: MR_DELEGATION_TOKEN, Service: 10.60.4.87:10020, Ident: (MR_DELEGATION_TOKEN owner=icon0104, renewer=yarn, realUser=oozie/trmas-fc2d552a.azcloud.local@AZCLOUD.LOCAL, issueDate=1523395181545, maxDate=1523999981545, sequenceNumber=1051, masterKeyId=4)
2018-04-10 21:19:58,584 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: RM_DELEGATION_TOKEN, Service: 10.60.4.86:8032,10.60.4.82:8032, Ident: (RM_DELEGATION_TOKEN owner=icon0104, renewer=yarn, realUser=oozie/trmas-fc2d552a.azcloud.local@AZCLOUD.LOCAL, issueDate=1523395180807, maxDate=1523999980807, sequenceNumber=18771, masterKeyId=46)
2018-04-10 21:19:58,584 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: kms-dt, Service: 10.60.4.89:16000, Ident: (kms-dt owner=icon0104, renewer=yarn, realUser=oozie, issueDate=1523395180919, maxDate=1523999980919, sequenceNumber=10522, masterKeyId=52)
2018-04-10 21:19:58,700 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now.
2018-04-10 21:19:59,090 INFO [main] org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /data1/yarn/usercache/icon0104/appcache/application_1523145714907_1686,/data2/yarn/usercache/icon0104/appcache/application_1523145714907_1686,/data3/yarn/usercache/icon0104/appcache/application_1523145714907_1686,/data4/yarn/usercache/icon0104/appcache/application_1523145714907_1686
2018-04-10 21:19:59,600 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2018-04-10 21:20:00,205 INFO [main] org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2018-04-10 21:20:00,569 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: org.apache.oozie.action.hadoop.OozieLauncherInputFormat$EmptySplit@2ae58f93
2018-04-10 21:20:00,575 INFO [main] org.apache.hadoop.mapred.MapTask: numReduceTasks: 0
2018-04-10 21:20:00,593 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
2018-04-10 21:20:01,036 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
2018-04-10 21:20:01,461 INFO [main] org.apache.hive.jdbc.Utils: Supplied authorities: trmas-fc2d552a.azcloud.local:10000
2018-04-10 21:20:01,461 INFO [main] org.apache.hive.jdbc.Utils: Resolved authority: trmas-fc2d552a.azcloud.local:10000
2018-04-10 21:20:04,073 INFO [main] org.apache.hadoop.io.compress.zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
2018-04-10 21:20:04,074 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor [.deflate]
2018-04-10 21:20:04,181 INFO [main] org.apache.hadoop.mapred.Task: Task:attempt_1523145714907_1686_m_000000_0 is done. And is in the process of committing
2018-04-10 21:20:04,358 INFO [main] org.apache.hadoop.mapred.Task: Task 'attempt_1523145714907_1686_m_000000_0' done.
2018-04-10 21:20:04,459 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system shutdown complete.

LogType:container-localizer-syslog
Log Upload Time:Tue Apr 10 21:20:11 +0000 2018
LogLength:0
Log Contents:

Container: container_e11_1523145714907_1686_01_000001 on trwor-dafb587f.azcloud.local_8041
============================================================================================
LogType:stderr
Log Upload Time:Tue Apr 10 21:20:12 +0000 2018
LogLength:2758
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.13.2-1.cdh5.13.2.p0.3/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data2/yarn/filecache/438/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Apr 10, 2018 9:20:01 PM com.google.inject.servlet.InternalServletModule$BackwardsCompatibleServletContextProvider get
WARNING: You are attempting to use a deprecated API (specifically, attempting to @Inject ServletContext inside an eagerly created singleton. While we allow this for backwards compatibility, be warned that this MAY have unexpected behavior if you have more than one injector (with ServletModule) running in the same JVM. Please consult the Guice documentation at http://code.google.com/p/google-guice/wiki/Servlets for more information.
Apr 10, 2018 9:20:02 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver as a provider class
Apr 10, 2018 9:20:02 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Apr 10, 2018 9:20:02 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices as a root resource class
Apr 10, 2018 9:20:02 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
Apr 10, 2018 9:20:02 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
Apr 10, 2018 9:20:02 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
Apr 10, 2018 9:20:02 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices to GuiceManagedComponentProvider with the scope "PerRequest"
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
LogType:stdout
Log Upload Time:Tue Apr 10 21:20:12 +0000 2018
LogLength:0
Log Contents:

LogType:syslog
Log Upload Time:Tue Apr 10 21:20:12 +0000 2018
LogLength:27766
Log Contents:
2018-04-10 21:19:46,714 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1523145714907_1686_000001
2018-04-10 21:19:47,152 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2018-04-10 21:19:47,153 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: kms-dt, Service: 10.60.4.88:16000, Ident: (kms-dt owner=icon0104, renewer=yarn, realUser=oozie, issueDate=1523395180973, maxDate=1523999980973, sequenceNumber=10523, masterKeyId=51)
2018-04-10 21:19:47,389 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@737612)
2018-04-10 21:19:47,389 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: HIVE_DELEGATION_TOKEN, Service: hiveserver2ClientToken, Ident: 00 08 69 63 6f 6e 30 31 30 34 04 68 69 76 65 2f 68 69 76 65 2f 74 72 6d 61 73 2d 66 63 32 64 35 35 32 61 2e 61 7a 63 6c 6f 75 64 2e 6c 6f 63 61 6c 40 41 5a 43 4c 4f 55 44 2e 4c 4f 43 41 4c 8a 01 62 b1 6d b7 df 8a 01 62 d5 7a 3b df 8e 02 81 03
2018-04-10 21:19:47,390 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: HDFS_DELEGATION_TOKEN, Service: 10.60.4.82:8020, Ident: (token for icon0104: HDFS_DELEGATION_TOKEN owner=icon0104, renewer=yarn, realUser=oozie/trmas-fc2d552a.azcloud.local@AZCLOUD.LOCAL, issueDate=1523395180887, maxDate=1523999980887, sequenceNumber=10692, masterKeyId=36)
2018-04-10 21:19:47,390 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:hanameservice, Ident: (token for icon0104: HDFS_DELEGATION_TOKEN owner=icon0104, renewer=yarn, realUser=oozie/trmas-fc2d552a.azcloud.local@AZCLOUD.LOCAL, issueDate=1523395181246, maxDate=1523999981246, sequenceNumber=10693, masterKeyId=36)
2018-04-10 21:19:47,391 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: MR_DELEGATION_TOKEN, Service: 10.60.4.87:10020, Ident: (MR_DELEGATION_TOKEN owner=icon0104, renewer=yarn, realUser=oozie/trmas-fc2d552a.azcloud.local@AZCLOUD.LOCAL, issueDate=1523395181545, maxDate=1523999981545, sequenceNumber=1051, masterKeyId=4)
2018-04-10 21:19:47,391 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: RM_DELEGATION_TOKEN, Service: 10.60.4.86:8032,10.60.4.82:8032, Ident: (RM_DELEGATION_TOKEN owner=icon0104, renewer=yarn, realUser=oozie/trmas-fc2d552a.azcloud.local@AZCLOUD.LOCAL, issueDate=1523395180807, maxDate=1523999980807, sequenceNumber=18771, masterKeyId=46)
2018-04-10 21:19:47,391 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: kms-dt, Service: 10.60.4.89:16000, Ident: (kms-dt owner=icon0104, renewer=yarn, realUser=oozie, issueDate=1523395180919, maxDate=1523999980919, sequenceNumber=10522, masterKeyId=52)
2018-04-10 21:19:47,415 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config org.apache.oozie.action.hadoop.OozieLauncherOutputCommitter
2018-04-10 21:19:47,417 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is org.apache.oozie.action.hadoop.OozieLauncherOutputCommitter
2018-04-10 21:19:48,051 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-04-10 21:19:48,335 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2018-04-10 21:19:48,336 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2018-04-10 21:19:48,337 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2018-04-10 21:19:48,338 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2018-04-10 21:19:48,339 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2018-04-10 21:19:48,340 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
2018-04-10 21:19:48,340 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2018-04-10 21:19:48,342 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2018-04-10 21:19:48,418 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://hanameservice:8020]
2018-04-10 21:19:48,461 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://hanameservice:8020]
2018-04-10 21:19:48,503 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://hanameservice:8020]
2018-04-10 21:19:48,531 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Emitting job history data to the timeline server is not enabled
2018-04-10 21:19:48,589 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
2018-04-10 21:19:48,854 WARN [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Metrics system not started: org.apache.commons.configuration.ConfigurationException: Unable to load the configuration from the URL file:/run/cloudera-scm-agent/process/823-yarn-NODEMANAGER/hadoop-metrics2.properties
2018-04-10 21:19:48,913 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for job_1523145714907_1686 to jobTokenSecretManager
2018-04-10 21:19:49,103 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing job_1523145714907_1686 because: not enabled;
2018-04-10 21:19:49,129 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job job_1523145714907_1686 = 0. Number of splits = 1
2018-04-10 21:19:49,129 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for job job_1523145714907_1686 = 0
2018-04-10 21:19:49,129 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1523145714907_1686Job Transitioned from NEW to INITED
2018-04-10 21:19:49,131 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching normal, non-uberized, multi-container job job_1523145714907_1686.
2018-04-10 21:19:49,163 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 100
2018-04-10 21:19:49,171 INFO [Socket Reader #1 for port 43275] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 43275
2018-04-10 21:19:49,189 INFO [main] org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
2018-04-10 21:19:49,220 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-04-10 21:19:49,220 INFO [IPC Server listener on 43275] org.apache.hadoop.ipc.Server: IPC Server listener on 43275: starting
2018-04-10 21:19:49,221 INFO [main] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated MRClientService at trwor-dafb587f.azcloud.local/10.60.4.80:43275
2018-04-10 21:19:49,287 INFO [main] org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2018-04-10 21:19:49,294 INFO [main] org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2018-04-10 21:19:49,299 INFO [main] org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.mapreduce is not defined
2018-04-10 21:19:49,309 INFO [main] org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2018-04-10 21:19:49,339 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context mapreduce
2018-04-10 21:19:49,339 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context static
2018-04-10 21:19:49,342 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /mapreduce/*
2018-04-10 21:19:49,343 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2018-04-10 21:19:49,353 INFO [main] org.apache.hadoop.http.HttpServer2: Jetty bound to port 34888
2018-04-10 21:19:49,353 INFO [main] org.mortbay.log: jetty-6.1.26.cloudera.4
2018-04-10 21:19:49,391 INFO [main] org.mortbay.log: Extract jar:file:/opt/cloudera/parcels/CDH-5.13.2-1.cdh5.13.2.p0.3/jars/hadoop-yarn-common-2.6.0-cdh5.13.2.jar!/webapps/mapreduce to ./tmp/Jetty_0_0_0_0_34888_mapreduce____5edk2q/webapp
2018-04-10 21:19:49,752 INFO [main] org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:34888
2018-04-10 21:19:49,752 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Web app /mapreduce started at 34888
2018-04-10 21:19:50,106 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2018-04-10 21:19:50,112 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 3000
2018-04-10 21:19:50,113 INFO [Socket Reader #1 for port 46033] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 46033
2018-04-10 21:19:50,159 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-04-10 21:19:50,159 INFO [IPC Server listener on 46033] org.apache.hadoop.ipc.Server: IPC Server listener on 46033: starting
2018-04-10 21:19:50,207 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true
2018-04-10 21:19:50,207 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3
2018-04-10 21:19:50,207 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33
2018-04-10 21:19:50,316 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: maxContainerCapability: <memory:16491, vCores:16>
2018-04-10 21:19:50,316 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: root.users.icon0104
2018-04-10 21:19:50,319 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper limit on the thread pool size is 500
2018-04-10 21:19:50,319 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: The thread pool initial size is 10
2018-04-10 21:19:50,334 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1523145714907_1686Job Transitioned from INITED to SETUP
2018-04-10 21:19:50,336 INFO [CommitterEvent Processor #0] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_SETUP
2018-04-10 21:19:50,338 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1523145714907_1686Job Transitioned from SETUP to RUNNING
2018-04-10 21:19:50,397 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1523145714907_1686_m_000000 Task Transitioned from NEW to SCHEDULED
2018-04-10 21:19:50,398 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1523145714907_1686_m_000000_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2018-04-10 21:19:50,400 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: mapResourceRequest:<memory:1024, vCores:1>
2018-04-10 21:19:50,422 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1523145714907_1686, File: hdfs://hanameservice:8020/user/icon0104/.staging/job_1523145714907_1686/job_1523145714907_1686_1.jhist
2018-04-10 21:19:50,877 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://hanameservice:8020]
2018-04-10 21:19:51,322 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0
2018-04-10 21:19:51,358 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1523145714907_1686: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:39340, vCores:38> knownNMs=4
2018-04-10 21:19:52,368 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
2018-04-10 21:19:52,392 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_e11_1523145714907_1686_01_000002 to attempt_1523145714907_1686_m_000000_0
2018-04-10 21:19:52,393 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0
2018-04-10 21:19:52,445 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Job jar is not present. Not adding any jar to the list of resources.
2018-04-10 21:19:52,467 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf file on the remote FS is /user/icon0104/.staging/job_1523145714907_1686/job.xml
2018-04-10 21:19:52,786 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #7 tokens and #3 secret keys for NM use for launching container
2018-04-10 21:19:52,786 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of containertokens_dob is 8
2018-04-10 21:19:52,786 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting shuffle token in serviceData
2018-04-10 21:19:53,424 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapred.JobConf: Task java-opts do not specify heap size. Setting task attempt jvm max heap size to -Xmx820m
2018-04-10 21:19:53,427 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1523145714907_1686_m_000000_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2018-04-10 21:19:53,429 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1523145714907_1686: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:39340, vCores:38> knownNMs=4
2018-04-10 21:19:53,431 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_e11_1523145714907_1686_01_000002 taskAttempt attempt_1523145714907_1686_m_000000_0
2018-04-10 21:19:53,433 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1523145714907_1686_m_000000_0
2018-04-10 21:19:53,512 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1523145714907_1686_m_000000_0 : 13562
2018-04-10 21:19:53,515 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1523145714907_1686_m_000000_0] using containerId: [container_e11_1523145714907_1686_01_000002 on NM: [trwor-7d01cf08.azcloud.local:8041]
2018-04-10 21:19:53,518 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1523145714907_1686_m_000000_0 TaskAttempt Transitioned from ASSIGNED to RUNNING
2018-04-10 21:19:53,518 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1523145714907_1686_m_000000 Task Transitioned from SCHEDULED to RUNNING
2018-04-10 21:19:58,899 INFO [Socket Reader #1 for port 46033] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1523145714907_1686 (auth:SIMPLE)
2018-04-10 21:19:58,940 INFO [Socket Reader #1 for port 46033] SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for job_1523145714907_1686 (auth:TOKEN) for protocol=interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
2018-04-10 21:19:58,947 INFO [IPC Server handler 0 on 46033] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1523145714907_1686_m_12094627905538 asked for a task
2018-04-10 21:19:58,948 INFO [IPC Server handler 0 on 46033] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1523145714907_1686_m_12094627905538 given task: attempt_1523145714907_1686_m_000000_0
2018-04-10 21:20:04,168 INFO [IPC Server handler 1 on 46033] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1523145714907_1686_m_000000_0 is : 0.0
2018-04-10 21:20:04,345 INFO [IPC Server handler 0 on 46033] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
attempt_1523145714907_1686_m_000000_0 is : 1.0 2018-04-10 21:20:04,354 INFO [IPC Server handler 2 on 46033] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from attempt_1523145714907_1686_m_000000_0 2018-04-10 21:20:04,357 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1523145714907_1686_m_000000_0 TaskAttempt Transitioned from RUNNING to SUCCESS_FINISHING_CONTAINER 2018-04-10 21:20:04,364 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1523145714907_1686_m_000000_0 2018-04-10 21:20:04,366 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1523145714907_1686_m_000000 Task Transitioned from RUNNING to SUCCEEDED 2018-04-10 21:20:04,368 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1 2018-04-10 21:20:04,368 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1523145714907_1686Job Transitioned from RUNNING to COMMITTING 2018-04-10 21:20:04,369 INFO [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_COMMIT 2018-04-10 21:20:04,433 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Calling handler for JobFinishedEvent 2018-04-10 21:20:04,434 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1523145714907_1686Job Transitioned from COMMITTING to SUCCEEDED 2018-04-10 21:20:04,435 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so this is the last retry 2018-04-10 21:20:04,435 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator isAMLastRetry: true 2018-04-10 21:20:04,435 INFO [Thread-70] 
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator notified that shouldUnregistered is: true 2018-04-10 21:20:04,435 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry: true 2018-04-10 21:20:04,435 INFO [Thread-70] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: JobHistoryEventHandler notified that forceJobCompletion is true 2018-04-10 21:20:04,435 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the services 2018-04-10 21:20:04,435 INFO [Thread-70] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping JobHistoryEventHandler. Size of the outstanding queue size is 2 2018-04-10 21:20:04,443 INFO [Thread-70] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In stop, writing event TASK_FINISHED 2018-04-10 21:20:04,448 INFO [Thread-70] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In stop, writing event JOB_FINISHED 2018-04-10 21:20:04,454 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0 2018-04-10 21:20:04,760 INFO [Thread-70] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://hanameservice:8020/user/icon0104/.staging/job_1523145714907_1686/job_1523145714907_1686_1.jhist to hdfs://hanameservice:8020/user/history/done_intermediate/icon0104/job_1523145714907_1686-1523395182119-icon0104-oozie%3Alauncher%3AT%3Dhive2%3AW%3Dswamp_teradata_consolidat-1523395204432-1-0-SUCCEEDED-root.users.icon0104-1523395190329.jhist_tmp 2018-04-10 21:20:04,886 INFO [Thread-70] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: 
hdfs://hanameservice:8020/user/history/done_intermediate/icon0104/job_1523145714907_1686-1523395182119-icon0104-oozie%3Alauncher%3AT%3Dhive2%3AW%3Dswamp_teradata_consolidat-1523395204432-1-0-SUCCEEDED-root.users.icon0104-1523395190329.jhist_tmp 2018-04-10 21:20:04,904 INFO [Thread-70] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://hanameservice:8020/user/icon0104/.staging/job_1523145714907_1686/job_1523145714907_1686_1_conf.xml to hdfs://hanameservice:8020/user/history/done_intermediate/icon0104/job_1523145714907_1686_conf.xml_tmp 2018-04-10 21:20:05,002 INFO [Thread-70] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://hanameservice:8020/user/history/done_intermediate/icon0104/job_1523145714907_1686_conf.xml_tmp 2018-04-10 21:20:05,023 INFO [Thread-70] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://hanameservice:8020/user/history/done_intermediate/icon0104/job_1523145714907_1686.summary_tmp to hdfs://hanameservice:8020/user/history/done_intermediate/icon0104/job_1523145714907_1686.summary 2018-04-10 21:20:05,037 INFO [Thread-70] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://hanameservice:8020/user/history/done_intermediate/icon0104/job_1523145714907_1686_conf.xml_tmp to hdfs://hanameservice:8020/user/history/done_intermediate/icon0104/job_1523145714907_1686_conf.xml 2018-04-10 21:20:05,047 INFO [Thread-70] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://hanameservice:8020/user/history/done_intermediate/icon0104/job_1523145714907_1686-1523395182119-icon0104-oozie%3Alauncher%3AT%3Dhive2%3AW%3Dswamp_teradata_consolidat-1523395204432-1-0-SUCCEEDED-root.users.icon0104-1523395190329.jhist_tmp to 
hdfs://hanameservice:8020/user/history/done_intermediate/icon0104/job_1523145714907_1686-1523395182119-icon0104-oozie%3Alauncher%3AT%3Dhive2%3AW%3Dswamp_teradata_consolidat-1523395204432-1-0-SUCCEEDED-root.users.icon0104-1523395190329.jhist 2018-04-10 21:20:05,047 INFO [Thread-70] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped JobHistoryEventHandler. super.stop() 2018-04-10 21:20:05,048 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1523145714907_1686_m_000000_0 2018-04-10 21:20:05,065 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1523145714907_1686_m_000000_0 TaskAttempt Transitioned from SUCCESS_FINISHING_CONTAINER to SUCCEEDED 2018-04-10 21:20:05,066 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job diagnostics to 2018-04-10 21:20:05,066 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url is https://trmas-fc2d552a.azcloud.local:19890/jobhistory/job/job_1523145714907_1686 2018-04-10 21:20:05,073 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for application to be successfully unregistered. 
2018-04-10 21:20:06,076 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0 2018-04-10 21:20:06,077 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory hdfs://trmas-c9471d78.azcloud.local:8020 /user/icon0104/.staging/job_1523145714907_1686 2018-04-10 21:20:06,091 INFO [Thread-70] org.apache.hadoop.ipc.Server: Stopping server on 46033 2018-04-10 21:20:06,095 INFO [IPC Server listener on 46033] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 46033 2018-04-10 21:20:06,095 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder 2018-04-10 21:20:06,096 INFO [TaskHeartbeatHandler PingChecker] org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler: TaskHeartbeatHandler thread interrupted 2018-04-10 21:20:06,098 INFO [Ping Checker] org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: TaskAttemptFinishingMonitor thread interrupted 2018-04-10 21:20:06,099 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Job end notification started for jobID : job_1523145714907_1686 2018-04-10 21:20:06,101 INFO [Thread-70] org.mortbay.log: Job end notification attempts left 0 2018-04-10 21:20:06,101 INFO [Thread-70] org.mortbay.log: Job end notification trying http://trmas-fc2d552a.azcloud.local:11000/oozie/callback?id=0000622-180408000321809-oozie-oozi-W@DLT01V_VPNRELCTKT_CONSOLIDATE_TABLE_ACTION&status=SUCCEEDED 2018-04-10 21:20:06,105 INFO [Thread-70] org.mortbay.log: Job end notification to http://trmas-fc2d552a.azcloud.local:11000/oozie/callback?id=0000622-180408000321809-oozie-oozi-W@DLT01V_VPNRELCTKT_CONSOLIDATE_TABLE_ACTION&status=SUCCEEDED succeeded 2018-04-10 21:20:06,105 INFO [Thread-70] org.mortbay.log: Job end notification succeeded for job_1523145714907_1686 2018-04-10 21:20:11,105 INFO 
[Thread-70] org.apache.hadoop.ipc.Server: Stopping server on 43275 2018-04-10 21:20:11,106 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder 2018-04-10 21:20:11,106 INFO [IPC Server listener on 43275] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 43275 2018-04-10 21:20:11,108 INFO [Thread-70] org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:0 LogType:container-localizer-syslog Log Upload Time:Tue Apr 10 21:20:12 +0000 2018 LogLength:0 Log Contents:

I've tried running the content of the .hql script directly in Hue to check whether there was a problem with the Hive-SQL syntax (after substituting the parameters, of course), and it completes without errors.
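For context, the generated hive2 action looks roughly like this (sketch only: the script path, parameter, and transition targets are placeholders; the JDBC URL and action name are the ones visible in the logs above):

```xml
<action name="DLT01V_VPNRELCTKT_CONSOLIDATE_TABLE_ACTION">
    <hive2 xmlns="uri:oozie:hive2-action:0.1">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <jdbc-url>jdbc:hive2://trmas-fc2d552a.azcloud.local:10000/default;ssl=true</jdbc-url>
        <script>${scriptsDir}/consolidate_table.hql</script>
        <param>...</param>
    </hive2>
    <ok to="..."/>
    <error to="..."/>
</action>
```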

How can I debug the problem in this case? Is there any other log that I should check?
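For what it's worth, the only error line I can find is the one in the launcher's stderr. This is roughly how I'm searching the aggregated log for it (file name `launcher.log` is just an example; the `yarn logs` line is commented out and replaced by the relevant excerpt so the grep can be shown):

```shell
#!/bin/sh
# On the cluster I would first dump the launcher's aggregated log:
#   yarn logs -applicationId application_1523145714907_1686 > launcher.log
# Here the relevant stderr excerpt is written out directly instead:
cat > launcher.log <<'EOF'
No rows affected (0.17 seconds)
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=1)
Intercepting System.exit(2)
EOF

# Print only the error / exit lines with their line numbers
grep -nE 'Error|System\.exit' launcher.log
```

This surfaces the `return code 1 from MapRedTask` line, but nothing that explains *why* the underlying Hive MR job failed, which is exactly my problem.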


Thanks for any help