Explorer
Posts: 17
Registered: ‎10-23-2017

Hue user cannot load data into Hive via Sqoop 1


Hi All,

 

I am trying to load data from SQL Server into Hive in Hue using Sqoop 1. The following statement is working:

 

import --connect jdbc:sqlserver://xxx:1433;database=INFA_SOURCE --username infa_source --password B1source --table Personen -m 1
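
However, as soon as I add --hive-import it stops working. For reference, the failing call is just the working statement with that one flag appended (I have not added --hive-table or --hive-database, although Sqoop would also accept those):

import --connect jdbc:sqlserver://xxx:1433;database=INFA_SOURCE --username infa_source --password B1source --table Personen -m 1 --hive-import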

Two jobs are running and both have status Succeeded, but the workflow is in status Killed. In the log file of the first job I can see that there is a failure at the end, but it is not specified what exactly failed:

 

2018-01-09 10:59:10,544 [main] INFO  org.apache.sqoop.manager.SqlManager  - Executing SQL statement: SELECT t.* FROM [Personen] AS t WHERE 1=0
2018-01-09 10:59:10,569 [main] INFO  org.apache.sqoop.hive.HiveImport  - Loading uploaded data into Hive

<<< Invocation of Sqoop command completed <<<

Hadoop Job IDs executed by Sqoop: job_1515423392261_0009

Intercepting System.exit(1)

<<< Invocation of Main class completed <<<

Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SqoopMain], exit code [1]

Oozie Launcher failed, finishing Hadoop job gracefully

As the same statement works via the command line, I think the issue is that the Hue user does not have access to the /user/hive/warehouse folder because of the sticky bit set on it.

 

I found information about enabling security with Sentry, but I am not sure whether that is really the issue here and whether it would be a solution.

 

I have mainly two questions:

- Can I remove the sticky bit to test --hive-import without issues and then put it back afterwards (see the command sketch below the questions)?

- If the issue is with privileges, is Sentry a possible solution?
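
For the first question, the check and toggle I have in mind would roughly be the following (the octal modes and the hive:hive ownership are only what I expect from the default CDH layout):

hdfs dfs -ls -d /user/hive/warehouse                    # current mode, I expect something like drwxrwxrwt hive hive
sudo -u hdfs hdfs dfs -chmod 0777 /user/hive/warehouse  # drop the sticky bit for the test
sudo -u hdfs hdfs dfs -chmod 1777 /user/hive/warehouse  # put it back afterwards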

 

Please let me know if you see a different problem here, or how else I could check it :)

 

Thanks

Anna

Explorer
Posts: 17
Registered: ‎10-23-2017

Re: Hue user cannot load data into Hive via Sqoop 1

Instead of removing the sticky bit from /user/hive/warehouse and enabling Sentry, I simply created a hive user in Hue, which seems to take care of the permission problem. Unfortunately I still cannot load data into Hive in one statement (import from SQL Server with --hive-import added).

 

There are two jobs running for it. The first job's default log file:

 

2018-01-10 17:46:13,651 [main] INFO  org.apache.sqoop.mapreduce.ImportJobBase  - Transferred 71 bytes in 25.0372 seconds (2.8358 bytes/sec)
2018-01-10 17:46:13,668 [main] INFO  org.apache.sqoop.mapreduce.ImportJobBase  - Retrieved 3 records.
2018-01-10 17:46:13,715 [main] INFO  org.apache.sqoop.manager.SqlManager  - Executing SQL statement: SELECT t.* FROM [Personen] AS t WHERE 1=0
2018-01-10 17:46:13,735 [main] INFO  org.apache.sqoop.hive.HiveImport  - Loading uploaded data into Hive
Heart beat

<<< Invocation of Sqoop command completed <<<

Hadoop Job IDs executed by Sqoop: job_1515581250288_0004

Intercepting System.exit(1)

<<< Invocation of Main class completed <<<

Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SqoopMain], exit code [1]

Oozie Launcher failed, finishing Hadoop job gracefully

Oozie Launcher, uploading action data to HDFS sequence file: hdfs://hadoop:8020/user/hive/oozie-oozi/0000001-180110114954358-oozie-oozi-W/sqoop-763b--sqoop/action-data.seq
Successfully reset security manager from org.apache.oozie.action.hadoop.LauncherSecurityManager@15871 to null

Oozie Launcher ends

stderr log file:

 

WARNING: You are attempting to use a deprecated API (specifically, attempting to @Inject ServletContext inside an eagerly created singleton. While we allow this for backwards compatibility, be warned that this MAY have unexpected behavior if you have more than one injector (with ServletModule) running in the same JVM. Please consult the Guice documentation at http://code.google.com/p/google-guice/wiki/Servlets for more information.
Jan 10, 2018 5:45:35 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver as a provider class
Jan 10, 2018 5:45:35 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Jan 10, 2018 5:45:35 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices as a root resource class
Jan 10, 2018 5:45:35 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
Jan 10, 2018 5:45:35 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
Jan 10, 2018 5:45:36 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
Jan 10, 2018 5:45:36 PM com.google.inject.servlet.InternalServletModule$BackwardsCompatibleServletContextProvider get
WARNING: You are attempting to use a deprecated API (specifically, attempting to @Inject ServletContext inside an eagerly created singleton. While we allow this for backwards compatibility, be warned that this MAY have unexpected behavior if you have more than one injector (with ServletModule) running in the same JVM. Please consult the Guice documentation at http://code.google.com/p/google-guice/wiki/Servlets for more information.
Jan 10, 2018 5:45:37 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices to GuiceManagedComponentProvider with the scope "PerRequest"
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

Second job's default log file is empty and stderr shows the same warning.

 

Does anyone know what's wrong?

 

Thanks

Anna

Champion
Posts: 617
Registered: ‎05-16-2016

Re: Hue user cannot load data into Hive via Sqoop 1

Anna 

 

Is impersonation enabled in your cluster / Hue?

 

 

https://www.cloudera.com/documentation/enterprise/5-8-x/topics/admin_hdfs_proxy_users.html

 

If you have Sentry in your cluster, make sure you grant the right privileges on the relevant tables/databases to the user that is hitting them.

 

 

https://www.cloudera.com/documentation/enterprise/5-5-x/topics/sg_hive_sql.html
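
For example, the kind of statements described in that doc look roughly like this (role, group and database names are just placeholders, adjust them to your setup):

CREATE ROLE etl_role;
GRANT ALL ON DATABASE default TO ROLE etl_role;
GRANT ROLE etl_role TO GROUP hive;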

 

In the meantime you can also look into the ResourceManager web UI and its logs for more details on why it got killed.

Explorer
Posts: 17
Registered: ‎10-23-2017

Re: Hue user cannot load data into Hive via Sqoop 1

Hi csguna,

 

Thank you for your answer!

 

I do not have Sentry enabled in my cluster. I tried to set up impersonation, but it looks like I already covered that by creating a hive user in Hue and running the jobs as this user. I also found the following properties already present in my core-site.xml:

 

 

  <property>
    <name>hadoop.proxyuser.hive.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hive.groups</name>
    <value>*</value>
  </property>

So I think this should be enough?
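
If I read the impersonation doc correctly, the same kind of entries would also matter for the oozie and hue users, since the job is submitted from Hue and runs through Oozie. They would look like this (I still have to verify whether my core-site.xml already contains them):

  <property>
    <name>hadoop.proxyuser.oozie.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.oozie.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hue.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hue.groups</name>
    <value>*</value>
  </property>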

 

 

I also checked the logs in the ResourceManager; the local logs (hadoop-cmf-yarn-JOBHISTORY-hadoop.log.out) contain the following:

 

 

2018-01-11 13:17:15,689 INFO org.apache.hadoop.mapreduce.jobhistory.JobSummary: jobId=job_1515668016860_0003,submitTime=1515673012029,launchTime=1515673021112,firstMapTaskLaunchTime=1515673024309,firstReduceTaskLaunchTime=0,finishTime=1515673031360,resourcesPerMap=4096,resourcesPerReduce=0,numMaps=1,numReduces=0,user=hive,queue=default,status=SUCCEEDED,mapSlotSeconds=27,reduceSlotSeconds=0,jobName=oozie:action:T\=sqoop:W\=Batch job for query-sqoop1:A\=sqoop-974b:ID\=0000001-180111115500387-oozie-oozi-W
2018-01-11 13:17:15,689 INFO org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager: Deleting JobSummary file: [hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0003.summary]
2018-01-11 13:17:15,711 INFO org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager: Moving hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0003-1515673012029-hive-oozie%3Aaction%3AT%3Dsqoop%3AW%3DBatch+job+for+query%2Dsqoop1%3A-1515673031360-1-0-SUCCEEDED-root.users.hive-1515673021112.jhist to hdfs://hadoop:8020/user/history/done/2018/01/11/000000/job_1515668016860_0003-1515673012029-hive-oozie%3Aaction%3AT%3Dsqoop%3AW%3DBatch+job+for+query%2Dsqoop1%3A-1515673031360-1-0-SUCCEEDED-root.users.hive-1515673021112.jhist
2018-01-11 13:17:15,718 INFO org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager: Moving hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0003_conf.xml to hdfs://hadoop:8020/user/history/done/2018/01/11/000000/job_1515668016860_0003_conf.xml
2018-01-11 13:17:15,733 INFO org.apache.hadoop.mapreduce.v2.hs.CompletedJob: Loading job: job_1515668016860_0003 from file: hdfs://hadoop:8020/user/history/done/2018/01/11/000000/job_1515668016860_0003-1515673012029-hive-oozie%3Aaction%3AT%3Dsqoop%3AW%3DBatch+job+for+query%2Dsqoop1%3A-1515673031360-1-0-SUCCEEDED-root.users.hive-1515673021112.jhist
2018-01-11 13:17:15,734 INFO org.apache.hadoop.mapreduce.v2.hs.CompletedJob: Loading history file: [hdfs://hadoop:8020/user/history/done/2018/01/11/000000/job_1515668016860_0003-1515673012029-hive-oozie%3Aaction%3AT%3Dsqoop%3AW%3DBatch+job+for+query%2Dsqoop1%3A-1515673031360-1-0-SUCCEEDED-root.users.hive-1515673021112.jhist]
2018-01-11 13:17:19,360 INFO org.apache.hadoop.mapreduce.v2.hs.CompletedJob: Loading job: job_1515668016860_0002 from file: hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0002-1515672984627-hive-oozie%3Alauncher%3AT%3Dsqoop%3AW%3DBatch+job+for+query%2Dsqoop-1515673038642-1-0-SUCCEEDED-root.users.hive-1515672993654.jhist
2018-01-11 13:17:19,361 INFO org.apache.hadoop.mapreduce.v2.hs.CompletedJob: Loading history file: [hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0002-1515672984627-hive-oozie%3Alauncher%3AT%3Dsqoop%3AW%3DBatch+job+for+query%2Dsqoop-1515673038642-1-0-SUCCEEDED-root.users.hive-1515672993654.jhist]
2018-01-11 13:17:19,445 INFO org.apache.hadoop.mapreduce.jobhistory.JobSummary: jobId=job_1515668016860_0002,submitTime=1515672984627,launchTime=1515672993654,firstMapTaskLaunchTime=1515672998068,firstReduceTaskLaunchTime=0,finishTime=1515673038642,resourcesPerMap=4096,resourcesPerReduce=0,numMaps=1,numReduces=0,user=hive,queue=default,status=SUCCEEDED,mapSlotSeconds=161,reduceSlotSeconds=0,jobName=oozie:launcher:T\=sqoop:W\=Batch job for query-sqoop1:A\=sqoop-974b:ID\=0000001-180111115500387-oozie-oozi-W
2018-01-11 13:17:19,445 INFO org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager: Deleting JobSummary file: [hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0002.summary]
2018-01-11 13:17:19,453 INFO org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager: Moving hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0002-1515672984627-hive-oozie%3Alauncher%3AT%3Dsqoop%3AW%3DBatch+job+for+query%2Dsqoop-1515673038642-1-0-SUCCEEDED-root.users.hive-1515672993654.jhist to hdfs://hadoop:8020/user/history/done/2018/01/11/000000/job_1515668016860_0002-1515672984627-hive-oozie%3Alauncher%3AT%3Dsqoop%3AW%3DBatch+job+for+query%2Dsqoop-1515673038642-1-0-SUCCEEDED-root.users.hive-1515672993654.jhist
2018-01-11 13:17:19,458 INFO org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager: Moving hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0002_conf.xml to hdfs://hadoop:8020/user/history/done/2018/01/11/000000/job_1515668016860_0002_conf.xml
2018-01-11 13:17:42,673 INFO org.apache.hadoop.mapreduce.v2.hs.JobHistory: Starting scan to move intermediate done files
2018-01-11 13:18:58,430 INFO org.apache.hadoop.yarn.webapp.View: Getting list of all Jobs.
2018-01-11 13:20:20,644 INFO logs: Aliases are enabled
2018-01-11 13:20:42,674 INFO org.apache.hadoop.mapreduce.v2.hs.JobHistory: Starting scan to move intermediate done files

 

These are only INFO-level messages.

 

There are two jobs running for that statement. The log files of the first job from the ResourceManager:

 

 

2018-01-11 13:35:56,139 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1515668016860_0005_000001
2018-01-11 13:35:57,129 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2018-01-11 13:35:57,129 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@30a945f8)
2018-01-11 13:35:57,428 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: RM_DELEGATION_TOKEN, Service: 192.168.10.17:8032, Ident: (RM_DELEGATION_TOKEN owner=hive, renewer=oozie mr token, realUser=oozie, issueDate=1515674151920, maxDate=1516278951920, sequenceNumber=15, masterKeyId=2)
2018-01-11 13:35:57,458 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config org.apache.oozie.action.hadoop.OozieLauncherOutputCommitter
2018-01-11 13:35:57,461 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is org.apache.oozie.action.hadoop.OozieLauncherOutputCommitter
2018-01-11 13:35:58,345 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-01-11 13:35:58,561 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2018-01-11 13:35:58,562 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2018-01-11 13:35:58,563 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2018-01-11 13:35:58,564 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2018-01-11 13:35:58,564 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2018-01-11 13:35:58,566 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
2018-01-11 13:35:58,566 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2018-01-11 13:35:58,567 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2018-01-11 13:35:58,633 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://hadoop:8020]
2018-01-11 13:35:58,670 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://hadoop:8020]
2018-01-11 13:35:58,704 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://hadoop:8020]
2018-01-11 13:35:58,727 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Emitting job history data to the timeline server is not enabled
2018-01-11 13:35:58,788 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
2018-01-11 13:35:59,100 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2018-01-11 13:35:59,207 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2018-01-11 13:35:59,207 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics system started
2018-01-11 13:35:59,227 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for job_1515668016860_0005 to jobTokenSecretManager
2018-01-11 13:35:59,417 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing job_1515668016860_0005 because: not enabled; too much RAM;
2018-01-11 13:35:59,450 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job job_1515668016860_0005 = 0. Number of splits = 1
2018-01-11 13:35:59,451 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for job job_1515668016860_0005 = 0
2018-01-11 13:35:59,451 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1515668016860_0005Job Transitioned from NEW to INITED
2018-01-11 13:35:59,453 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching normal, non-uberized, multi-container job job_1515668016860_0005.
2018-01-11 13:35:59,491 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 100
2018-01-11 13:35:59,503 INFO [Socket Reader #1 for port 35567] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 35567
2018-01-11 13:35:59,563 INFO [main] org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
2018-01-11 13:35:59,563 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-01-11 13:35:59,564 INFO [IPC Server listener on 35567] org.apache.hadoop.ipc.Server: IPC Server listener on 35567: starting
2018-01-11 13:35:59,566 INFO [main] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated MRClientService at hadoop/192.168.10.17:35567
2018-01-11 13:35:59,675 INFO [main] org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2018-01-11 13:35:59,689 INFO [main] org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2018-01-11 13:35:59,697 INFO [main] org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.mapreduce is not defined
2018-01-11 13:35:59,715 INFO [main] org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2018-01-11 13:35:59,733 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context mapreduce
2018-01-11 13:35:59,733 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context static
2018-01-11 13:35:59,739 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /mapreduce/*
2018-01-11 13:35:59,740 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2018-01-11 13:35:59,756 INFO [main] org.apache.hadoop.http.HttpServer2: Jetty bound to port 36294
2018-01-11 13:35:59,756 INFO [main] org.mortbay.log: jetty-6.1.26.cloudera.4
2018-01-11 13:35:59,817 INFO [main] org.mortbay.log: Extract jar:file:/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/jars/hadoop-yarn-common-2.6.0-cdh5.13.0.jar!/webapps/mapreduce to ./tmp/Jetty_0_0_0_0_36294_mapreduce____h4wdpr/webapp
2018-01-11 13:36:00,477 INFO [main] org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:36294
2018-01-11 13:36:00,478 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Web app /mapreduce started at 36294
2018-01-11 13:36:00,936 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2018-01-11 13:36:00,942 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 3000
2018-01-11 13:36:00,943 INFO [Socket Reader #1 for port 46509] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 46509
2018-01-11 13:36:00,955 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-01-11 13:36:00,956 INFO [IPC Server listener on 46509] org.apache.hadoop.ipc.Server: IPC Server listener on 46509: starting
2018-01-11 13:36:00,981 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true
2018-01-11 13:36:00,981 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3
2018-01-11 13:36:00,981 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33
2018-01-11 13:36:01,080 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at hadoop/192.168.10.17:8030
2018-01-11 13:36:01,168 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: maxContainerCapability: <memory:15360, vCores:4>
2018-01-11 13:36:01,169 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: root.users.hive
2018-01-11 13:36:01,174 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper limit on the thread pool size is 500
2018-01-11 13:36:01,174 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: The thread pool initial size is 10
2018-01-11 13:36:01,185 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1515668016860_0005Job Transitioned from INITED to SETUP
2018-01-11 13:36:01,187 INFO [CommitterEvent Processor #0] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_SETUP
2018-01-11 13:36:01,190 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1515668016860_0005Job Transitioned from SETUP to RUNNING
2018-01-11 13:36:01,249 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1515668016860_0005_m_000000 Task Transitioned from NEW to SCHEDULED
2018-01-11 13:36:01,254 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1515668016860_0005_m_000000_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2018-01-11 13:36:01,260 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: mapResourceRequest:<memory:4096, vCores:1>
2018-01-11 13:36:01,328 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1515668016860_0005, File: hdfs://hadoop:8020/user/hive/.staging/job_1515668016860_0005/job_1515668016860_0005_1.jhist
2018-01-11 13:36:02,172 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0
2018-01-11 13:36:02,173 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://hadoop:8020]
2018-01-11 13:36:02,237 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1515668016860_0005: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:12288, vCores:3> knownNMs=1
2018-01-11 13:36:03,252 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
2018-01-11 13:36:03,291 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1515668016860_0005_01_000002 to attempt_1515668016860_0005_m_000000_0
2018-01-11 13:36:03,292 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0
2018-01-11 13:36:03,358 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Job jar is not present. Not adding any jar to the list of resources.
2018-01-11 13:36:03,378 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf file on the remote FS is /user/hive/.staging/job_1515668016860_0005/job.xml
2018-01-11 13:36:03,657 WARN [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.util.MRApps: cache file (mapreduce.job.cache.files) hdfs://hadoop:8020/user/oozie/share/lib/lib_20171122100655/sqoop/sqljdbc4.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://hadoop:8020/user/oozie/share/lib/lib_20171122100655/oozie/sqljdbc4.jar This will be an error in Hadoop 2.0
2018-01-11 13:36:03,657 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #1 tokens and #1 secret keys for NM use for launching container
2018-01-11 13:36:03,657 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of containertokens_dob is 2
2018-01-11 13:36:03,658 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting shuffle token in serviceData
2018-01-11 13:36:04,188 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1515668016860_0005_m_000000_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2018-01-11 13:36:04,203 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1515668016860_0005_01_000002 taskAttempt attempt_1515668016860_0005_m_000000_0
2018-01-11 13:36:04,206 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1515668016860_0005_m_000000_0
2018-01-11 13:36:04,297 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1515668016860_0005: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:8192, vCores:2> knownNMs=1
2018-01-11 13:36:04,329 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1515668016860_0005_m_000000_0 : 13562
2018-01-11 13:36:04,331 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1515668016860_0005_m_000000_0] using containerId: [container_1515668016860_0005_01_000002 on NM: [hadoop:8041]
2018-01-11 13:36:04,337 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1515668016860_0005_m_000000_0 TaskAttempt Transitioned from ASSIGNED to RUNNING
2018-01-11 13:36:04,338 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1515668016860_0005_m_000000 Task Transitioned from SCHEDULED to RUNNING
2018-01-11 13:36:07,867 INFO [Socket Reader #1 for port 46509] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1515668016860_0005 (auth:SIMPLE)
2018-01-11 13:36:07,894 INFO [IPC Server handler 0 on 46509] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1515668016860_0005_m_000002 asked for a task
2018-01-11 13:36:07,895 INFO [IPC Server handler 0 on 46509] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1515668016860_0005_m_000002 given task: attempt_1515668016860_0005_m_000000_0
2018-01-11 13:36:21,913 INFO [IPC Server handler 1 on 46509] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1515668016860_0005_m_000000_0 is : 1.0
2018-01-11 13:36:42,601 INFO [IPC Server handler 0 on 46509] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1515668016860_0005_m_000000_0 is : 1.0
2018-01-11 13:36:42,808 INFO [IPC Server handler 1 on 46509] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1515668016860_0005_m_000000_0 is : 1.0
2018-01-11 13:36:42,813 INFO [IPC Server handler 3 on 46509] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from attempt_1515668016860_0005_m_000000_0
2018-01-11 13:36:42,817 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1515668016860_0005_m_000000_0 TaskAttempt Transitioned from RUNNING to SUCCESS_FINISHING_CONTAINER
2018-01-11 13:36:42,827 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1515668016860_0005_m_000000_0
2018-01-11 13:36:42,829 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1515668016860_0005_m_000000 Task Transitioned from RUNNING to SUCCEEDED
2018-01-11 13:36:42,832 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1
2018-01-11 13:36:42,832 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1515668016860_0005Job Transitioned from RUNNING to COMMITTING
2018-01-11 13:36:42,833 INFO [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_COMMIT
2018-01-11 13:36:42,879 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Calling handler for JobFinishedEvent 
2018-01-11 13:36:42,880 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1515668016860_0005Job Transitioned from COMMITTING to SUCCEEDED
2018-01-11 13:36:42,880 INFO [Thread-90] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so this is the last retry
2018-01-11 13:36:42,881 INFO [Thread-90] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator isAMLastRetry: true
2018-01-11 13:36:42,881 INFO [Thread-90] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator notified that shouldUnregistered is: true
2018-01-11 13:36:42,881 INFO [Thread-90] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry: true
2018-01-11 13:36:42,881 INFO [Thread-90] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: JobHistoryEventHandler notified that forceJobCompletion is true
2018-01-11 13:36:42,881 INFO [Thread-90] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the services
2018-01-11 13:36:42,882 INFO [Thread-90] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping JobHistoryEventHandler. Size of the outstanding queue size is 1
2018-01-11 13:36:42,884 INFO [Thread-90] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In stop, writing event JOB_FINISHED
2018-01-11 13:36:42,940 INFO [Thread-90] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://hadoop:8020/user/hive/.staging/job_1515668016860_0005/job_1515668016860_0005_1.jhist to hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0005-1515674153126-hive-oozie%3Alauncher%3AT%3Dsqoop%3AW%3DBatch+job+for+query%2Dsqoop-1515674202876-1-0-SUCCEEDED-root.users.hive-1515674161178.jhist_tmp
2018-01-11 13:36:42,976 INFO [Thread-90] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0005-1515674153126-hive-oozie%3Alauncher%3AT%3Dsqoop%3AW%3DBatch+job+for+query%2Dsqoop-1515674202876-1-0-SUCCEEDED-root.users.hive-1515674161178.jhist_tmp
2018-01-11 13:36:42,982 INFO [Thread-90] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://hadoop:8020/user/hive/.staging/job_1515668016860_0005/job_1515668016860_0005_1_conf.xml to hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0005_conf.xml_tmp
2018-01-11 13:36:43,026 INFO [Thread-90] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0005_conf.xml_tmp
2018-01-11 13:36:43,040 INFO [Thread-90] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0005.summary_tmp to hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0005.summary
2018-01-11 13:36:43,045 INFO [Thread-90] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0005_conf.xml_tmp to hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0005_conf.xml
2018-01-11 13:36:43,049 INFO [Thread-90] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0005-1515674153126-hive-oozie%3Alauncher%3AT%3Dsqoop%3AW%3DBatch+job+for+query%2Dsqoop-1515674202876-1-0-SUCCEEDED-root.users.hive-1515674161178.jhist_tmp to hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0005-1515674153126-hive-oozie%3Alauncher%3AT%3Dsqoop%3AW%3DBatch+job+for+query%2Dsqoop-1515674202876-1-0-SUCCEEDED-root.users.hive-1515674161178.jhist
2018-01-11 13:36:43,049 INFO [Thread-90] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped JobHistoryEventHandler. super.stop()
2018-01-11 13:36:43,050 INFO [Thread-90] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1515668016860_0005_m_000000_0
2018-01-11 13:36:43,076 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1515668016860_0005_m_000000_0 TaskAttempt Transitioned from SUCCESS_FINISHING_CONTAINER to SUCCEEDED
2018-01-11 13:36:43,078 INFO [Thread-90] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job diagnostics to 
2018-01-11 13:36:43,079 INFO [Thread-90] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url is http://hadoop:19888/jobhistory/job/job_1515668016860_0005
2018-01-11 13:36:43,088 INFO [Thread-90] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for application to be successfully unregistered.
2018-01-11 13:36:44,090 INFO [Thread-90] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0
2018-01-11 13:36:44,095 INFO [Thread-90] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory hdfs://hadoop:8020 /user/hive/.staging/job_1515668016860_0005
2018-01-11 13:36:44,107 INFO [Thread-90] org.apache.hadoop.ipc.Server: Stopping server on 46509
2018-01-11 13:36:44,110 INFO [IPC Server listener on 46509] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 46509
2018-01-11 13:36:44,111 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2018-01-11 13:36:44,113 INFO [TaskHeartbeatHandler PingChecker] org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler: TaskHeartbeatHandler thread interrupted
2018-01-11 13:36:44,115 INFO [Ping Checker] org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: TaskAttemptFinishingMonitor thread interrupted
2018-01-11 13:36:44,115 INFO [Thread-90] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Job end notification started for jobID : job_1515668016860_0005
2018-01-11 13:36:44,117 INFO [Thread-90] org.mortbay.log: Job end notification attempts left 0
2018-01-11 13:36:44,117 INFO [Thread-90] org.mortbay.log: Job end notification trying http://hadoop:11000/oozie/callback?id=0000003-180111115500387-oozie-oozi-W@sqoop-456e&status=SUCCEEDED
2018-01-11 13:36:44,137 INFO [Thread-90] org.mortbay.log: Job end notification to http://hadoop:11000/oozie/callback?id=0000003-180111115500387-oozie-oozi-W@sqoop-456e&status=SUCCEEDED succeeded
2018-01-11 13:36:44,138 INFO [Thread-90] org.mortbay.log: Job end notification succeeded for job_1515668016860_0005
2018-01-11 13:36:49,138 INFO [Thread-90] org.apache.hadoop.ipc.Server: Stopping server on 35567
2018-01-11 13:36:49,139 INFO [IPC Server listener on 35567] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 35567
2018-01-11 13:36:49,139 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2018-01-11 13:36:49,152 INFO [Thread-90] org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:0

 

 

Both jobs succeed, but the workflow fails. The data is loaded into HDFS, because I can see it in the user folder /user/hive/Personen, but it never makes it into the warehouse. The same operation works when I run it from the console.
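
To double-check where the data actually ends up, I look at both locations (the warehouse sub-directory name is only my guess at what the Hive table directory would be called):

hdfs dfs -ls /user/hive/Personen              # Sqoop import target directory, the files show up here
hdfs dfs -ls /user/hive/warehouse/personen    # expected Hive warehouse location for the table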

 

In the local Hive logs (hadoop-cmf-hive-HIVESERVER2-hadoop.log.out) I can see an error:

Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

but it is an old one (about two hours back); after it appeared I restarted the whole cluster and it did not show up in the newest run. The whole error:

 

2018-01-11 11:55:03,342 WARN  org.apache.hive.service.server.HiveServer2: [main]: Error starting HiveServer2 on attempt 1, will retry in 60000ms
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:833)
    at org.apache.hadoop.hive.ql.session.SessionState.getAuthorizationMode(SessionState.java:1679)
    at org.apache.hadoop.hive.ql.session.SessionState.isAuthorizationModeV2(SessionState.java:1690)
    at org.apache.hadoop.hive.ql.session.SessionState.applyAuthorizationPolicy(SessionState.java:1738)
    at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:125)
    at org.apache.hive.service.cli.CLIService.init(CLIService.java:111)
    at org.apache.hive.service.CompositeService.init(CompositeService.java:59)
    at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:125)
    at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:542)
    at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:89)
    at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:793)
    at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:666)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:391)
    at org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:810)
    ... 17 more
Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProviderBase.setConf(HiveAuthorizationProviderBase.java:114)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
    at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:388)
    ... 18 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:220)
    at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:338)
    at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:299)
    at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:274)
    at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:256)
    at org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider.init(DefaultHiveAuthorizationProvider.java:29)
    at org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProviderBase.setConf(HiveAuthorizationProviderBase.java:112)
    ... 21 more
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1562)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:67)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:82)
    at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3399)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3643)
    at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:231)
    at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:215)
    ... 27 more
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1560)
    ... 34 more
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
    at org.apache.thrift.transport.TSocket.open(TSocket.java:226)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:464)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:244)
    at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1560)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:67)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:82)
    at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3399)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3643)
    at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:231)
    at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:215)
    at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:338)
    at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:299)
    at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:274)
    at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:256)
    at org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider.init(DefaultHiveAuthorizationProvider.java:29)
    at org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProviderBase.setConf(HiveAuthorizationProviderBase.java:112)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
    at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:388)
    at org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:810)
    at org.apache.hadoop.hive.ql.session.SessionState.getAuthorizationMode(SessionState.java:1679)
    at org.apache.hadoop.hive.ql.session.SessionState.isAuthorizationModeV2(SessionState.java:1690)
    at org.apache.hadoop.hive.ql.session.SessionState.applyAuthorizationPolicy(SessionState.java:1738)
    at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:125)
    at org.apache.hive.service.cli.CLIService.init(CLIService.java:111)
    at org.apache.hive.service.CompositeService.init(CompositeService.java:59)
    at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:125)
    at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:542)
    at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:89)
    at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:793)
    at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:666)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.thrift.transport.TSocket.open(TSocket.java:221)
    ... 42 more
)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:512)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:244)
    at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
    ... 39 more
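
The "Connection refused" at the bottom makes me think the Hive Metastore Server may simply not have been reachable at that moment. The quick check I plan to run on the metastore host is roughly this (9083 is the default metastore port, adjust if hive.metastore.uris says otherwise):

netstat -plnt | grep 9083    # is the metastore listening on its port?
telnet hadoop 9083           # can the port be reached from this node?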

 

Can this be a memory issue?

 

I have set:

- Java Heap Size of DataNode in Bytes: 824 MiB
- Java Heap Size of NameNode in Bytes: 4 GiB
- yarn.app.mapreduce.am.resource.mb: 3 GiB
- yarn.scheduler.maximum-allocation-mb: 64 GiB (I set this to the default here)
- yarn.scheduler.maximum-allocation-vcores: 4
- mapreduce.map.memory.mb: 4 GiB
- mapreduce.reduce.memory.mb: 4 GiB
 
 
Sorry for the long message, but I figured posting all the log files could be beneficial, as someone may see more in them than I do :)
 
I am stuck and do not know where else to look. Can you please advise?
 
Thanks
Anna
 
 
Explorer
Posts: 17
Registered: ‎10-23-2017

Re: Hue user cannot load data into Hive via Sqoop 1

I am also posting the log files from the second job, gathered from the ResourceManager:

 

2018-01-11 13:36:21,388 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1515668016860_0006_000001
2018-01-11 13:36:22,510 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2018-01-11 13:36:22,511 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@3930b5bd)
2018-01-11 13:36:22,794 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: mapreduce.job, Service: job_1515668016860_0005, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@4f4f5969)
2018-01-11 13:36:22,795 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: RM_DELEGATION_TOKEN, Service: 192.168.10.17:8032, Ident: (RM_DELEGATION_TOKEN owner=hive, renewer=oozie mr token, realUser=oozie, issueDate=1515674151920, maxDate=1516278951920, sequenceNumber=15, masterKeyId=2)
2018-01-11 13:36:22,823 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
2018-01-11 13:36:22,825 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config null
2018-01-11 13:36:22,910 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 1
2018-01-11 13:36:22,910 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2018-01-11 13:36:23,845 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2018-01-11 13:36:24,034 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2018-01-11 13:36:24,036 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2018-01-11 13:36:24,037 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2018-01-11 13:36:24,038 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2018-01-11 13:36:24,038 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2018-01-11 13:36:24,039 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
2018-01-11 13:36:24,040 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2018-01-11 13:36:24,041 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2018-01-11 13:36:24,104 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://hadoop:8020]
2018-01-11 13:36:24,146 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://hadoop:8020]
2018-01-11 13:36:24,250 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://hadoop:8020]
2018-01-11 13:36:24,273 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Emitting job history data to the timeline server is not enabled
2018-01-11 13:36:24,336 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
2018-01-11 13:36:24,612 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2018-01-11 13:36:24,700 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2018-01-11 13:36:24,700 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics system started
2018-01-11 13:36:24,715 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for job_1515668016860_0006 to jobTokenSecretManager
2018-01-11 13:36:24,960 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing job_1515668016860_0006 because: not enabled; too much RAM;
2018-01-11 13:36:24,980 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job job_1515668016860_0006 = 0. Number of splits = 1
2018-01-11 13:36:24,981 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for job job_1515668016860_0006 = 0
2018-01-11 13:36:24,981 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1515668016860_0006Job Transitioned from NEW to INITED
2018-01-11 13:36:24,982 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching normal, non-uberized, multi-container job job_1515668016860_0006.
2018-01-11 13:36:25,019 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 100
2018-01-11 13:36:25,031 INFO [Socket Reader #1 for port 35640] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 35640
2018-01-11 13:36:25,079 INFO [main] org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
2018-01-11 13:36:25,080 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-01-11 13:36:25,080 INFO [IPC Server listener on 35640] org.apache.hadoop.ipc.Server: IPC Server listener on 35640: starting
2018-01-11 13:36:25,081 INFO [main] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated MRClientService at hadoop/192.168.10.17:35640
2018-01-11 13:36:25,164 INFO [main] org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2018-01-11 13:36:25,176 INFO [main] org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2018-01-11 13:36:25,185 INFO [main] org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.mapreduce is not defined
2018-01-11 13:36:25,198 INFO [main] org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2018-01-11 13:36:25,209 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context mapreduce
2018-01-11 13:36:25,209 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context static
2018-01-11 13:36:25,215 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /mapreduce/*
2018-01-11 13:36:25,215 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2018-01-11 13:36:25,232 INFO [main] org.apache.hadoop.http.HttpServer2: Jetty bound to port 42678
2018-01-11 13:36:25,232 INFO [main] org.mortbay.log: jetty-6.1.26.cloudera.4
2018-01-11 13:36:25,284 INFO [main] org.mortbay.log: Extract jar:file:/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/jars/hadoop-yarn-common-2.6.0-cdh5.13.0.jar!/webapps/mapreduce to ./tmp/Jetty_0_0_0_0_42678_mapreduce____74rm68/webapp
2018-01-11 13:36:26,030 INFO [main] org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:42678
2018-01-11 13:36:26,031 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Web app /mapreduce started at 42678
2018-01-11 13:36:26,449 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2018-01-11 13:36:26,456 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 3000
2018-01-11 13:36:26,457 INFO [Socket Reader #1 for port 34790] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 34790
2018-01-11 13:36:26,473 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-01-11 13:36:26,492 INFO [IPC Server listener on 34790] org.apache.hadoop.ipc.Server: IPC Server listener on 34790: starting
2018-01-11 13:36:26,523 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true
2018-01-11 13:36:26,523 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3
2018-01-11 13:36:26,523 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33
2018-01-11 13:36:26,635 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at hadoop/192.168.10.17:8030
2018-01-11 13:36:26,738 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: maxContainerCapability: <memory:15360, vCores:4>
2018-01-11 13:36:26,738 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: root.users.hive
2018-01-11 13:36:26,743 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper limit on the thread pool size is 500
2018-01-11 13:36:26,743 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: The thread pool initial size is 10
2018-01-11 13:36:26,797 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1515668016860_0006Job Transitioned from INITED to SETUP
2018-01-11 13:36:26,799 INFO [CommitterEvent Processor #0] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_SETUP
2018-01-11 13:36:26,823 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1515668016860_0006Job Transitioned from SETUP to RUNNING
2018-01-11 13:36:26,855 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1515668016860_0006_m_000000 Task Transitioned from NEW to SCHEDULED
2018-01-11 13:36:26,858 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1515668016860_0006_m_000000_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2018-01-11 13:36:26,860 INFO [Thread-53] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: mapResourceRequest:<memory:4096, vCores:1>
2018-01-11 13:36:26,931 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1515668016860_0006, File: hdfs://hadoop:8020/user/hive/.staging/job_1515668016860_0006/job_1515668016860_0006_1.jhist
2018-01-11 13:36:27,527 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://hadoop:8020]
2018-01-11 13:36:27,742 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0
2018-01-11 13:36:27,846 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1515668016860_0006: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:5120, vCores:1> knownNMs=1
2018-01-11 13:36:28,860 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
2018-01-11 13:36:28,894 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1515668016860_0006_01_000002 to attempt_1515668016860_0006_m_000000_0
2018-01-11 13:36:28,896 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0
2018-01-11 13:36:28,995 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar file on the remote FS is hdfs://hadoop:8020/user/hive/.staging/job_1515668016860_0006/job.jar
2018-01-11 13:36:29,001 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf file on the remote FS is /user/hive/.staging/job_1515668016860_0006/job.xml
2018-01-11 13:36:29,293 WARN [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.util.MRApps: cache file (mapreduce.job.cache.files) hdfs://hadoop:8020/user/oozie/share/lib/lib_20171122100655/sqoop/sqljdbc4.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://hadoop:8020/user/oozie/share/lib/lib_20171122100655/oozie/sqljdbc4.jar This will be an error in Hadoop 2.0
2018-01-11 13:36:29,295 WARN [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.util.MRApps: cache file (mapreduce.job.cache.files) hdfs://hadoop:8020/user/oozie/share/lib/lib_20171122100655/sqoop/sqoop.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://hadoop:8020/user/hive/.staging/job_1515668016860_0006/libjars/sqoop.jar This will be an error in Hadoop 2.0
2018-01-11 13:36:29,298 WARN [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.util.MRApps: cache file (mapreduce.job.cache.files) hdfs://hadoop:8020/user/oozie/share/lib/lib_20171122100655/sqoop/sqljdbc4.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://hadoop:8020/user/hive/.staging/job_1515668016860_0006/libjars/sqljdbc4.jar This will be an error in Hadoop 2.0
2018-01-11 13:36:29,298 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #2 tokens and #2 secret keys for NM use for launching container
2018-01-11 13:36:29,298 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of containertokens_dob is 2
2018-01-11 13:36:29,299 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting shuffle token in serviceData
2018-01-11 13:36:29,920 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1515668016860_0006_m_000000_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2018-01-11 13:36:29,925 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1515668016860_0006: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:1024, vCores:0> knownNMs=1
2018-01-11 13:36:29,929 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1515668016860_0006_01_000002 taskAttempt attempt_1515668016860_0006_m_000000_0
2018-01-11 13:36:29,932 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1515668016860_0006_m_000000_0
2018-01-11 13:36:30,051 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1515668016860_0006_m_000000_0 : 13562
2018-01-11 13:36:30,054 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1515668016860_0006_m_000000_0] using containerId: [container_1515668016860_0006_01_000002 on NM: [hadoop:8041]
2018-01-11 13:36:30,059 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1515668016860_0006_m_000000_0 TaskAttempt Transitioned from ASSIGNED to RUNNING
2018-01-11 13:36:30,063 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1515668016860_0006_m_000000 Task Transitioned from SCHEDULED to RUNNING
2018-01-11 13:36:33,937 INFO [Socket Reader #1 for port 34790] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1515668016860_0006 (auth:SIMPLE)
2018-01-11 13:36:33,964 INFO [IPC Server handler 0 on 34790] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1515668016860_0006_m_000002 asked for a task
2018-01-11 13:36:33,964 INFO [IPC Server handler 0 on 34790] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1515668016860_0006_m_000002 given task: attempt_1515668016860_0006_m_000000_0
2018-01-11 13:36:36,912 INFO [IPC Server handler 0 on 34790] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1515668016860_0006_m_000000_0 is : 0.0
2018-01-11 13:36:37,164 INFO [IPC Server handler 1 on 34790] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Commit-pending state update from attempt_1515668016860_0006_m_000000_0
2018-01-11 13:36:37,165 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1515668016860_0006_m_000000_0 TaskAttempt Transitioned from RUNNING to COMMIT_PENDING
2018-01-11 13:36:37,166 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: attempt_1515668016860_0006_m_000000_0 given a go for committing the task output.
2018-01-11 13:36:37,167 INFO [IPC Server handler 2 on 34790] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Commit go/no-go request from attempt_1515668016860_0006_m_000000_0
2018-01-11 13:36:37,168 INFO [IPC Server handler 2 on 34790] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Result of canCommit for attempt_1515668016860_0006_m_000000_0:true
2018-01-11 13:36:37,254 INFO [IPC Server handler 3 on 34790] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1515668016860_0006_m_000000_0 is : 1.0
2018-01-11 13:36:37,263 INFO [IPC Server handler 4 on 34790] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from attempt_1515668016860_0006_m_000000_0
2018-01-11 13:36:37,266 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1515668016860_0006_m_000000_0 TaskAttempt Transitioned from COMMIT_PENDING to SUCCESS_FINISHING_CONTAINER
2018-01-11 13:36:37,273 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1515668016860_0006_m_000000_0
2018-01-11 13:36:37,274 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1515668016860_0006_m_000000 Task Transitioned from RUNNING to SUCCEEDED
2018-01-11 13:36:37,277 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1
2018-01-11 13:36:37,278 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1515668016860_0006Job Transitioned from RUNNING to COMMITTING
2018-01-11 13:36:37,279 INFO [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_COMMIT
2018-01-11 13:36:37,378 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Calling handler for JobFinishedEvent 
2018-01-11 13:36:37,379 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1515668016860_0006Job Transitioned from COMMITTING to SUCCEEDED
2018-01-11 13:36:37,380 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so this is the last retry
2018-01-11 13:36:37,380 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator isAMLastRetry: true
2018-01-11 13:36:37,380 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator notified that shouldUnregistered is: true
2018-01-11 13:36:37,380 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry: true
2018-01-11 13:36:37,380 INFO [Thread-70] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: JobHistoryEventHandler notified that forceJobCompletion is true
2018-01-11 13:36:37,380 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the services
2018-01-11 13:36:37,381 INFO [Thread-70] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping JobHistoryEventHandler. Size of the outstanding queue size is 0
2018-01-11 13:36:37,442 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://hadoop:8020/user/hive/.staging/job_1515668016860_0006/job_1515668016860_0006_1.jhist to hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0006-1515674177493-hive-oozie%3Aaction%3AT%3Dsqoop%3AW%3DBatch+job+for+query%2Dsqoop1%3A-1515674197376-1-0-SUCCEEDED-root.users.hive-1515674186786.jhist_tmp
2018-01-11 13:36:37,496 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0006-1515674177493-hive-oozie%3Aaction%3AT%3Dsqoop%3AW%3DBatch+job+for+query%2Dsqoop1%3A-1515674197376-1-0-SUCCEEDED-root.users.hive-1515674186786.jhist_tmp
2018-01-11 13:36:37,503 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://hadoop:8020/user/hive/.staging/job_1515668016860_0006/job_1515668016860_0006_1_conf.xml to hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0006_conf.xml_tmp
2018-01-11 13:36:37,539 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0006_conf.xml_tmp
2018-01-11 13:36:37,547 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0006.summary_tmp to hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0006.summary
2018-01-11 13:36:37,551 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0006_conf.xml_tmp to hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0006_conf.xml
2018-01-11 13:36:37,555 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0006-1515674177493-hive-oozie%3Aaction%3AT%3Dsqoop%3AW%3DBatch+job+for+query%2Dsqoop1%3A-1515674197376-1-0-SUCCEEDED-root.users.hive-1515674186786.jhist_tmp to hdfs://hadoop:8020/user/history/done_intermediate/hive/job_1515668016860_0006-1515674177493-hive-oozie%3Aaction%3AT%3Dsqoop%3AW%3DBatch+job+for+query%2Dsqoop1%3A-1515674197376-1-0-SUCCEEDED-root.users.hive-1515674186786.jhist
2018-01-11 13:36:37,556 INFO [Thread-70] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped JobHistoryEventHandler. super.stop()
2018-01-11 13:36:37,557 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1515668016860_0006_m_000000_0
2018-01-11 13:36:37,583 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1515668016860_0006_m_000000_0 TaskAttempt Transitioned from SUCCESS_FINISHING_CONTAINER to SUCCEEDED
2018-01-11 13:36:37,585 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job diagnostics to 
2018-01-11 13:36:37,585 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url is http://hadoop:19888/jobhistory/job/job_1515668016860_0006
2018-01-11 13:36:37,598 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for application to be successfully unregistered.
2018-01-11 13:36:38,600 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0
2018-01-11 13:36:38,607 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory hdfs://hadoop:8020 /user/hive/.staging/job_1515668016860_0006
2018-01-11 13:36:38,614 INFO [Thread-70] org.apache.hadoop.ipc.Server: Stopping server on 34790
2018-01-11 13:36:38,620 INFO [IPC Server listener on 34790] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 34790
2018-01-11 13:36:38,620 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2018-01-11 13:36:38,624 INFO [TaskHeartbeatHandler PingChecker] org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler: TaskHeartbeatHandler thread interrupted
2018-01-11 13:36:38,626 INFO [Ping Checker] org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: TaskAttemptFinishingMonitor thread interrupted
2018-01-11 13:36:38,626 INFO [Thread-70] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Job end notification started for jobID : job_1515668016860_0006
2018-01-11 13:36:38,629 INFO [Thread-70] org.mortbay.log: Job end notification attempts left 0
2018-01-11 13:36:38,629 INFO [Thread-70] org.mortbay.log: Job end notification trying http://hadoop:11000/oozie/callback?id=0000003-180111115500387-oozie-oozi-W@sqoop-456e&status=SUCCEEDED
2018-01-11 13:36:38,641 INFO [Thread-70] org.mortbay.log: Job end notification to http://hadoop:11000/oozie/callback?id=0000003-180111115500387-oozie-oozi-W@sqoop-456e&status=SUCCEEDED succeeded
2018-01-11 13:36:38,641 INFO [Thread-70] org.mortbay.log: Job end notification succeeded for job_1515668016860_0006
2018-01-11 13:36:43,642 INFO [Thread-70] org.apache.hadoop.ipc.Server: Stopping server on 35640
2018-01-11 13:36:43,642 INFO [IPC Server listener on 35640] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 35640
2018-01-11 13:36:43,642 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2018-01-11 13:36:43,655 INFO [Thread-70] org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:0
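From this log the MapReduce part of the import (job_1515668016860_0006, running as user hive in queue root.users.hive) finishes SUCCEEDED, so the exit code 1 apparently comes from the later "Loading uploaded data into Hive" step inside the Oozie launcher rather than from the map task itself. A minimal sketch of what I can check after the workflow is killed (the paths assume the default Sqoop target directory for the hive user and my table Personen, so they may differ on another setup):

 

# Did the plain import land on HDFS? By default Sqoop writes to /user/<user>/<table>
# before handing the files over to the Hive load step.
hdfs dfs -ls /user/hive/Personen

# Was the Hive table created/populated at all? (Hive stores table names in lower case.)
hive -e "SHOW TABLES LIKE 'personen'; SELECT COUNT(*) FROM personen;"

 

If the files are there but the table is not, another thing I could try is adding --verbose to the Sqoop command so the Hive error shows up in the launcher's stderr instead of only the generic exit code 1, and making sure the Hive libraries are available to the Sqoop action, e.g. by setting oozie.action.sharelib.for.sqoop=sqoop,hive in the job properties (just a guess on my side, not something I have verified yet).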