Member since: 07-05-2018
119 Posts
3 Kudos Received
0 Solutions
05-04-2020
02:27 AM
Hello @Sergiete, thank you for drawing our attention to an improvement point in the Cloudera CDP documentation. We have added clarification to the CDP documentation about "Installing Hive on Tez" [1], noting that the Spark execution engine is not supported, as it has been replaced by Tez [2]. Based on the updated documentation [1], the correct order for installing the Hive services is:

1. Install the Hive service, designated Hive on Tez in CDP. HiveServer is installed automatically during this process.
2. Install HMS, which is designated Hive.

Best regards,
Ferenc

[1] https://docs.cloudera.com/runtime/7.0.3/hive-introduction/topics/hive_installing_on_tez.html
[2] https://docs.cloudera.com/runtime/7.0.3/hive-introduction/topics/hive-unsupported.html
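Once both services are installed, one quick sanity check (assuming a beeline session connected to HiveServer2) is to confirm the execution engine; hive.execution.engine is the standard Hive property for this:

```sql
-- In a beeline session connected to HiveServer2:
SET hive.execution.engine;
-- On CDP this should report tez; spark is no longer a supported value [2].
```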
12-16-2019
01:01 PM
1 Kudo
@VijayM, It seems when I posted, my smiley after "that was quite a gap in our conversation" disappeared. I wanted to be sure you knew it was supposed to be there 🙂
09-25-2019
03:44 AM
Hi Vijay, Sorry, I haven't been able to nail down the cause yet, but could you collect the EXPLAIN EXTENDED output of the queries and share it as an attachment to the post?

EXPLAIN EXTENDED select * from tdb1.t1 where bs1_dt=2017-06-23;
EXPLAIN EXTENDED select * from tdb1.t1;

I would like to check how HS2 builds the query plan and see if there is any clue. Cheers, Eric
09-01-2019
05:50 PM
@VijayM, a couple of questions:

1. Are CDH6 and CDH5 managed by the same Cloudera Manager, or do you manage them yourself?
2. From the setting you applied below:

spark.yarn.jars=local:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/spark/jars/*,local:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/spark/hive/*,local:/app/bds/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/spark/lib/*

it looks like you have both CDH 6.2 and CDH 5.16 running on the same host, is that right? Any reason you want to do so? As @JosiahGoodson mentioned, Spark 2 and Spark 1 are not compatible: you should have either Spark 1 or Spark 2 jars in the classpath, not both, otherwise they will conflict.

Cheers, Eric
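A quick way to spot this kind of mixed classpath is to scan the spark.yarn.jars value for the CDH parcel versions it references. This is a hypothetical helper, not part of any Cloudera tooling; the `parcel_versions` name is made up for the sketch:

```python
import re

# Hypothetical helper: flag a spark.yarn.jars value that mixes parcel
# versions, which is the Spark 1 / Spark 2 classpath conflict described above.
def parcel_versions(spark_yarn_jars):
    """Return the set of CDH parcel versions referenced by the setting."""
    return set(re.findall(r"CDH-(\d+\.\d+\.\d+)", spark_yarn_jars))

# Shortened version of the setting quoted in the post.
setting = (
    "local:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/spark/jars/*,"
    "local:/app/bds/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/spark/lib/*"
)

versions = parcel_versions(setting)
mixed = len(versions) > 1  # True here: both CDH 5 and CDH 6 jars are listed
```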
07-30-2019
09:49 AM
Yes, spark2-shell and spark2-submit have these issues as well. Any insights?
07-15-2019
05:44 AM
@bgooley The Cloudera Navigator Metastore needs the TLS password to be set in the CM UI, otherwise it won't start. Thanks for the help in fixing the issue. - Vijay M
06-13-2019
04:07 AM
@bgooley I need your help once again. This is for Hue connecting to HiveServer2 instances using an haproxy load balancer. Below is the haproxy configuration for HiveServer2 connections from Hue; kindly let me know if there is any mistake in it.

frontend hivejdbc_front
    bind *:10003
    mode tcp
    option tcplog
    timeout client 720m
    timeout server 720m
    default_backend hive-hue

#---------------------------------------------------------------------
# source balancing between the various backends
#---------------------------------------------------------------------
backend hive-hue
    balance source
    mode tcp
    server hs2_1 a301-9941-0809.ldn.swissbank.com:10001 check
    server hs2_2 a301-9941-1309.ldn.swissbank.com:10001 check

I also updated the Hue config property "Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini" from the CM UI and added the below into it. Kindly confirm whether it is correct.

[beeswax]
hive_server_host=a301-9941-0727.ldn.swissbank.com
hive_server_port=10003

Also, please help me with an haproxy config for any ODBC connectivity to Hive from BI tools. - Vijay M
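For ODBC, BI-tool clients generally connect to the same HiveServer2 Thrift port that JDBC uses, so a sketch along the same lines could work; note that port 10004 and the hive-odbc backend name below are made-up placeholders, not values from this thread:

```
frontend hiveodbc_front
    bind *:10004
    mode tcp
    option tcplog
    timeout client 720m
    timeout server 720m
    default_backend hive-odbc

backend hive-odbc
    balance source
    mode tcp
    server hs2_1 a301-9941-0809.ldn.swissbank.com:10001 check
    server hs2_2 a301-9941-1309.ldn.swissbank.com:10001 check
```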
03-13-2019
06:22 PM
Hello Vijay, Please see [1]; this use case isn't supported. However, the shared error suggests that the executor isn't able to connect to the Impala daemon due to authentication issues. This is because the executor runs in a separate JVM and must acquire a Kerberos TGT as well. To do this, you could make use of a JAAS configuration: see [2] and search for "To set up the JAAS login configuration file" (page 15). Once you have a tested JAAS login configuration and a keytab file, you could pass them to the executors as follows:

--conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=jaas.conf -Djavax.security.auth.useSubjectCredsOnly=false" \
--conf "spark.yarn.dist.files=<path_to_keytab>.keytab,<path_to_keytab>/jaas.conf"

Alternatively, if your Impala can authenticate using LDAP, you could also test with that. Hope this helps! Thanks, Sudarshan

[1] https://www.cloudera.com/documentation/spark2/latest/topics/spark2_known_issues.html#ki_jdbc_datasource_api
[2] https://www.cloudera.com/documentation/other/connectors/impala-jdbc/latest/Cloudera-JDBC-Driver-for-Impala-Install-Guide.pdf
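A typical jaas.conf for a keytab-based Krb5LoginModule login might look like the sketch below; the entry name and principal are placeholders (the exact entry name the driver expects is described in [2]), and useKeyTab, keyTab, principal, and doNotPrompt are standard Krb5LoginModule options:

```
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="<path_to_keytab>.keytab"
  principal="user@EXAMPLE.COM"
  doNotPrompt=true;
};
```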
03-10-2019
11:52 PM
@bgooley Can you help me with an example for user_attribute_mapping='{"uid":"username"}'? My requirement is: after integrating with SAML, when a user tries to access the Hue UI, the IdP returns the user's UID after authentication, but once the user is logged in, the Hue UI shows the username, which is what I want to be displayed. Can you also help me understand the property below?

user_attribute_mapping - Map user attributes provided by IDP to Hue attributes

What are Hue attributes? - Vijay M
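Conceptually, a mapping like user_attribute_mapping='{"uid":"username"}' takes an attribute name from the IdP's SAML response (here "uid") and writes its value onto a field of the Hue user model (here "username"). A minimal illustration of that idea, with made-up attribute values:

```python
import json

# Illustrative only: the IdP attribute name (key) is mapped onto the
# Hue user field (value). "vijaym" below is a made-up example value.
mapping = json.loads('{"uid": "username"}')

saml_response_attrs = {"uid": "vijaym"}  # what the IdP returns after auth

hue_user = {
    hue_field: saml_response_attrs[idp_attr]
    for idp_attr, hue_field in mapping.items()
}
# hue_user["username"] now holds the IdP's uid value, so Hue would
# create/display the user under that name.
```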
10-24-2018
05:23 AM
I was able to fix the authentication error; it was because the user was not present on all nodes of the cluster. I ran the job again and it failed. The job application log is provided below. I was able to figure out a few warnings and GC allocation failures. Can someone also review it and confirm what the problem is?

$ yarn logs -applicationId application_1540368383755_0019

Container: container_e50_1540368383755_0019_01_000002 on psbx05xv00app.she.pwj.com_8041 ========================================================================================= LogType:container-localizer-syslog Log Upload Time:Wed Oct 24 08:11:19 -0400 2018 LogLength:0 Log Contents: LogType:stderr Log Upload Time:Wed Oct 24 08:11:19 -0400 2018 LogLength:886 Log Contents: Picked up JAVA_TOOL_OPTIONS: -Doracle.jdbc.thinLogonCapability=o3 -Djava.security.krb5.conf=/etc/krb5_bds.conf log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapred.YarnChild). log4j:WARN Please initialize the log4j system properly. log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info. SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/sandbox/bds/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/sandbox/bds/data/yarn/nm/01/usercache/p_diku/appcache/application_1540368383755_0019/filecache/10/job.jar/job.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] LogType:stdout Log Upload Time:Wed Oct 24 08:11:19 -0400 2018 LogLength:1493 Log Contents: 1.294: [GC (Allocation Failure) [PSYoungGen: 131584K->13538K(153088K)] 131584K->13626K(502784K), 0.0173503 secs] [Times: user=0.15 sys=0.04, real=0.01 secs] 1.555: [GC (Metadata GC Threshold) [PSYoungGen: 63824K->7859K(153088K)] 63912K->7955K(502784K), 0.0127835 secs] [Times: user=0.20 sys=0.03, real=0.01 secs] 1.568: [Full GC (Metadata GC Threshold) [PSYoungGen: 7859K->0K(153088K)] [ParOldGen: 96K->7129K(349696K)] 7955K->7129K(502784K), [Metaspace: 20971K->20971K(1069056K)], 0.0369262 secs] [Times: user=0.44 sys=0.01, real=0.04 secs] POST 0: Entered run POST 11: After setEmbeddedH2OConfig 32.093: [GC (Allocation Failure) [PSYoungGen: 131584K->8382K(153088K)] 138713K->15519K(502784K), 0.0095201 secs] [Times: user=0.05 sys=0.01, real=0.01 secs] Heap PSYoungGen total 153088K, used 30980K [0x00000000f5580000, 0x0000000100000000, 0x0000000100000000) eden space 131584K, 17% used [0x00000000f5580000,0x00000000f6b91878,0x00000000fd600000) from space 21504K, 38% used [0x00000000fd600000,0x00000000fde2f9a0,0x00000000feb00000) to space 21504K, 0% used [0x00000000feb00000,0x00000000feb00000,0x0000000100000000) ParOldGen total 349696K, used 7137K [0x00000000e0000000, 0x00000000f5580000, 0x00000000f5580000) object space 349696K, 2% used [0x00000000e0000000,0x00000000e06f85a8,0x00000000f5580000) Metaspace used 28606K, capacity 28822K, committed 29104K, reserved 1075200K class space used 3348K, capacity 3437K, committed 3504K, reserved 1048576K Container: container_e50_1540368383755_0019_01_000001 on psbx13xv00app.she.pwj.com_8041 ========================================================================================= LogType:container-localizer-syslog Log Upload Time:Wed Oct 24 08:11:19 -0400 2018 LogLength:0 Log Contents: LogType:stderr Log Upload Time:Wed Oct 24 08:11:19 -0400 2018 LogLength:2933 Log Contents: Picked 
up JAVA_TOOL_OPTIONS: -Doracle.jdbc.thinLogonCapability=o3 -Djava.security.krb5.conf=/etc/krb5_bds.conf SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/sandbox/bds/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/sandbox/bds/data/yarn/nm/01/usercache/p_diku/appcache/application_1540368383755_0019/filecache/11/job.jar/job.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] Oct 24, 2018 8:09:29 AM com.google.inject.servlet.InternalServletModule$BackwardsCompatibleServletContextProvider get WARNING: You are attempting to use a deprecated API (specifically, attempting to @Inject ServletContext inside an eagerly created singleton. While we allow this for backwards compatibility, be warned that this MAY have unexpected behavior if you have more than one injector (with ServletModule) running in the same JVM. Please consult the Guice documentation at http://code.google.com/p/google-guice/wiki/Servlets for more information. 
Oct 24, 2018 8:09:29 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver as a provider class Oct 24, 2018 8:09:29 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class Oct 24, 2018 8:09:29 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices as a root resource class Oct 24, 2018 8:09:29 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM' Oct 24, 2018 8:09:29 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton" Oct 24, 2018 8:09:30 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton" Oct 24, 2018 8:09:30 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices to GuiceManagedComponentProvider with the scope "PerRequest" log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster). log4j:WARN Please initialize the log4j system properly. log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info. 
LogType:stdout Log Upload Time:Wed Oct 24 08:11:19 -0400 2018 LogLength:0 Log Contents: LogType:syslog Log Upload Time:Wed Oct 24 08:11:19 -0400 2018 LogLength:25758 Log Contents: 2018-10-24 08:09:16,307 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1540368383755_0019_000001 2018-10-24 08:09:16,510 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens: 2018-10-24 08:09:16,510 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@6caf0677) 2018-10-24 08:09:16,690 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:hdfsha, Ident: (token for p_diku: HDFS_DELEGATION_TOKEN owner=p_diku@BDS.UBS.COM, renewer=yarn, realUser=, issueDate=1540382949785, maxDate=1540987749785, sequenceNumber=104, masterKeyId=137) 2018-10-24 08:09:16,704 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter. 2018-10-24 08:09:16,705 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config null 2018-10-24 08:09:16,736 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 1 2018-10-24 08:09:16,736 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 2018-10-24 08:09:17,156 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... 
using builtin-java classes where applicable 2018-10-24 08:09:17,174 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter 2018-10-24 08:09:17,312 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler 2018-10-24 08:09:17,313 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher 2018-10-24 08:09:17,314 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher 2018-10-24 08:09:17,315 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher 2018-10-24 08:09:17,315 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler 2018-10-24 08:09:17,316 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher 2018-10-24 08:09:17,316 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter 2018-10-24 08:09:17,317 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter 2018-10-24 08:09:17,349 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://hdfsha:8020] 2018-10-24 08:09:17,367 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://hdfsha:8020] 2018-10-24 08:09:17,383 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://hdfsha:8020] 2018-10-24 08:09:17,396 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Emitting job history data to the timeline server is not enabled 2018-10-24 08:09:17,423 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler 2018-10-24 08:09:17,558 WARN [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Metrics system not started: org.apache.commons.configuration.ConfigurationException: Unable to load the configuration from the URL file:/run/cloudera-scm-agent/process/1785-yarn-NODEMANAGER/hadoop-metrics2.properties 2018-10-24 08:09:17,598 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for job_1540368383755_0019 to jobTokenSecretManager 2018-10-24 08:09:17,700 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing job_1540368383755_0019 because: not enabled; 2018-10-24 08:09:17,712 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job job_1540368383755_0019 = 0. 
Number of splits = 1 2018-10-24 08:09:17,712 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for job job_1540368383755_0019 = 0 2018-10-24 08:09:17,712 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1540368383755_0019Job Transitioned from NEW to INITED 2018-10-24 08:09:17,713 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching normal, non-uberized, multi-container job job_1540368383755_0019. 2018-10-24 08:09:17,732 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 100 2018-10-24 08:09:17,737 INFO [Socket Reader #1 for port 38385] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 38385 2018-10-24 08:09:17,750 INFO [main] org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server 2018-10-24 08:09:17,774 INFO [IPC Server listener on 38385] org.apache.hadoop.ipc.Server: IPC Server listener on 38385: starting 2018-10-24 08:09:17,774 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting 2018-10-24 08:09:17,775 INFO [main] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated MRClientService at psbx13xv00app.she.pwj.com/10.79.0.71:38385 2018-10-24 08:09:17,820 INFO [main] org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2018-10-24 08:09:17,825 INFO [main] org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets. 
2018-10-24 08:09:17,833 INFO [main] org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.mapreduce is not defined 2018-10-24 08:09:17,851 INFO [main] org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter) 2018-10-24 08:09:17,887 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context mapreduce 2018-10-24 08:09:17,887 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context static 2018-10-24 08:09:17,891 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /mapreduce/* 2018-10-24 08:09:17,891 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /ws/* 2018-10-24 08:09:17,900 INFO [main] org.apache.hadoop.http.HttpServer2: Jetty bound to port 36102 2018-10-24 08:09:17,900 INFO [main] org.mortbay.log: jetty-6.1.26.cloudera.4 2018-10-24 08:09:17,930 INFO [main] org.mortbay.log: Extract jar:file:/sandbox/bds/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/jars/hadoop-yarn-common-2.6.0-cdh5.13.3.jar!/webapps/mapreduce to /tmp/Jetty_0_0_0_0_36102_mapreduce____v067m1/webapp 2018-10-24 08:09:18,168 INFO [main] org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:36102 2018-10-24 08:09:18,169 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Web app /mapreduce started at 36102 2018-10-24 08:09:18,378 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules 2018-10-24 08:09:18,383 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 3000 2018-10-24 08:09:18,384 INFO [Socket Reader #1 for port 34682] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 34682 2018-10-24 08:09:18,411 INFO [IPC Server Responder] 
org.apache.hadoop.ipc.Server: IPC Server Responder: starting 2018-10-24 08:09:18,411 INFO [IPC Server listener on 34682] org.apache.hadoop.ipc.Server: IPC Server listener on 34682: starting 2018-10-24 08:09:18,428 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true 2018-10-24 08:09:18,428 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3 2018-10-24 08:09:18,428 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33 2018-10-24 08:09:18,508 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: maxContainerCapability: <memory:248832, vCores:76> 2018-10-24 08:09:18,509 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: root.wma_default_grp 2018-10-24 08:09:18,514 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper limit on the thread pool size is 500 2018-10-24 08:09:18,514 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: The thread pool initial size is 10 2018-10-24 08:09:18,525 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1540368383755_0019Job Transitioned from INITED to SETUP 2018-10-24 08:09:18,527 INFO [CommitterEvent Processor #0] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_SETUP 2018-10-24 08:09:18,539 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1540368383755_0019Job Transitioned from SETUP to RUNNING 2018-10-24 08:09:18,564 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1540368383755_0019_m_000000 Task Transitioned from NEW to SCHEDULED 2018-10-24 08:09:18,565 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1540368383755_0019_m_000000_0 TaskAttempt 
Transitioned from NEW to UNASSIGNED 2018-10-24 08:09:18,566 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: mapResourceRequest:<memory:563, vCores:1> 2018-10-24 08:09:18,675 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1540368383755_0019, File: hdfs://hdfsha:8020/user/p_diku/.staging/job_1540368383755_0019/job_1540368383755_0019_1.jhist 2018-10-24 08:09:18,752 INFO [Socket Reader #1 for port 38385] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for p_diku@BDS.UBS.COM (auth:SIMPLE) 2018-10-24 08:09:18,766 INFO [Socket Reader #1 for port 38385] SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for p_diku@BDS.UBS.COM (auth:TOKEN) for protocol=interface org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB 2018-10-24 08:09:18,884 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://hdfsha:8020] 2018-10-24 08:09:19,512 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0 2018-10-24 08:09:19,554 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1540368383755_0019: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:1479680, vCores:453> knownNMs=6 2018-10-24 08:09:20,566 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1 2018-10-24 08:09:20,605 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_e50_1540368383755_0019_01_000002 to attempt_1540368383755_0019_m_000000_0 2018-10-24 08:09:20,607 INFO 
[RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0 2018-10-24 08:09:20,674 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar file on the remote FS is hdfs://hdfsha/user/p_diku/.staging/job_1540368383755_0019/job.jar 2018-10-24 08:09:20,676 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf file on the remote FS is /user/p_diku/.staging/job_1540368383755_0019/job.xml 2018-10-24 08:09:20,677 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #1 tokens and #1 secret keys for NM use for launching container 2018-10-24 08:09:20,677 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of containertokens_dob is 2 2018-10-24 08:09:20,677 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting shuffle token in serviceData 2018-10-24 08:09:20,703 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1540368383755_0019_m_000000_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED 2018-10-24 08:09:20,706 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_e50_1540368383755_0019_01_000002 taskAttempt attempt_1540368383755_0019_m_000000_0 2018-10-24 08:09:20,709 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1540368383755_0019_m_000000_0 2018-10-24 08:09:20,779 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port 
returned by ContainerManager for attempt_1540368383755_0019_m_000000_0 : 13562 2018-10-24 08:09:20,780 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1540368383755_0019_m_000000_0] using containerId: [container_e50_1540368383755_0019_01_000002 on NM: [psbx05xv00app.she.pwj.com:8041] 2018-10-24 08:09:20,785 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1540368383755_0019_m_000000_0 TaskAttempt Transitioned from ASSIGNED to RUNNING 2018-10-24 08:09:20,785 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1540368383755_0019_m_000000 Task Transitioned from SCHEDULED to RUNNING 2018-10-24 08:09:21,609 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1540368383755_0019: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:1476608, vCores:452> knownNMs=6 2018-10-24 08:09:25,403 INFO [Socket Reader #1 for port 34682] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1540368383755_0019 (auth:SIMPLE) 2018-10-24 08:09:25,418 INFO [Socket Reader #1 for port 34682] SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for job_1540368383755_0019 (auth:TOKEN) for protocol=interface org.apache.hadoop.mapred.TaskUmbilicalProtocol 2018-10-24 08:09:25,427 INFO [IPC Server handler 0 on 34682] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1540368383755_0019_m_54975581388802 asked for a task 2018-10-24 08:09:25,428 INFO [IPC Server handler 0 on 34682] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1540368383755_0019_m_54975581388802 given task: attempt_1540368383755_0019_m_000000_0 2018-10-24 08:09:38,161 INFO [IPC Server handler 7 on 34682] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of 
TaskAttempt attempt_1540368383755_0019_m_000000_0 is : 0.0
2018-10-24 08:10:38,261 INFO [IPC Server handler 11 on 34682] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1540368383755_0019_m_000000_0 is : 0.0
2018-10-24 08:11:12,164 INFO [IPC Server handler 0 on 38385] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Kill job job_1540368383755_0019 received from p_diku@BDS.UBS.COM (auth:TOKEN) at 10.32.44.96
2018-10-24 08:11:12,165 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1540368383755_0019Job Transitioned from RUNNING to KILL_WAIT
2018-10-24 08:11:12,166 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1540368383755_0019_m_000000 Task Transitioned from RUNNING to KILL_WAIT
2018-10-24 08:11:12,167 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1540368383755_0019_m_000000_0 TaskAttempt Transitioned from RUNNING to KILL_CONTAINER_CLEANUP
2018-10-24 08:11:12,168 INFO [ContainerLauncher #1] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_e50_1540368383755_0019_01_000002 taskAttempt attempt_1540368383755_0019_m_000000_0
2018-10-24 08:11:12,168 INFO [ContainerLauncher #1] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1540368383755_0019_m_000000_0
2018-10-24 08:11:12,189 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1540368383755_0019_m_000000_0 TaskAttempt Transitioned from KILL_CONTAINER_CLEANUP to KILL_TASK_CLEANUP
2018-10-24 08:11:12,190 INFO [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: TASK_ABORT
2018-10-24 08:11:12,206 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1540368383755_0019_m_000000_0 TaskAttempt Transitioned from KILL_TASK_CLEANUP to KILLED
2018-10-24 08:11:12,222 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1540368383755_0019_m_000000 Task Transitioned from KILL_WAIT to KILLED
2018-10-24 08:11:12,224 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1
2018-10-24 08:11:12,225 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1540368383755_0019Job Transitioned from KILL_WAIT to KILL_ABORT
2018-10-24 08:11:12,226 INFO [CommitterEvent Processor #2] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_ABORT
2018-10-24 08:11:12,233 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1540368383755_0019Job Transitioned from KILL_ABORT to KILLED
2018-10-24 08:11:12,234 INFO [Thread-71] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so this is the last retry
2018-10-24 08:11:12,234 INFO [Thread-71] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator isAMLastRetry: true
2018-10-24 08:11:12,234 INFO [Thread-71] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator notified that shouldUnregistered is: true
2018-10-24 08:11:12,234 INFO [Thread-71] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry: true
2018-10-24 08:11:12,234 INFO [Thread-71] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: JobHistoryEventHandler notified that forceJobCompletion is true
2018-10-24 08:11:12,234 INFO [Thread-71] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the services
2018-10-24 08:11:12,235 INFO [Thread-71] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping JobHistoryEventHandler. Size of the outstanding queue size is 2
2018-10-24 08:11:12,242 INFO [Thread-71] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In stop, writing event TASK_FAILED
2018-10-24 08:11:12,246 INFO [Thread-71] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In stop, writing event JOB_KILLED
2018-10-24 08:11:12,295 INFO [Thread-71] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://hdfsha:8020/user/p_diku/.staging/job_1540368383755_0019/job_1540368383755_0019_1.jhist to hdfs://hdfsha:8020/user/history/done_intermediate/p_diku/job_1540368383755_0019-1540382951606-p_diku-H2O_31727-1540383072224-0-0-KILLED-root.wma_default_grp-1540382958519.jhist_tmp
2018-10-24 08:11:12,333 INFO [Thread-71] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://hdfsha:8020/user/history/done_intermediate/p_diku/job_1540368383755_0019-1540382951606-p_diku-H2O_31727-1540383072224-0-0-KILLED-root.wma_default_grp-1540382958519.jhist_tmp
2018-10-24 08:11:12,336 INFO [Thread-71] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://hdfsha:8020/user/p_diku/.staging/job_1540368383755_0019/job_1540368383755_0019_1_conf.xml to hdfs://hdfsha:8020/user/history/done_intermediate/p_diku/job_1540368383755_0019_conf.xml_tmp
2018-10-24 08:11:12,371 INFO [Thread-71] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://hdfsha:8020/user/history/done_intermediate/p_diku/job_1540368383755_0019_conf.xml_tmp
2018-10-24 08:11:12,395 INFO [Thread-71] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://hdfsha:8020/user/history/done_intermediate/p_diku/job_1540368383755_0019.summary_tmp to hdfs://hdfsha:8020/user/history/done_intermediate/p_diku/job_1540368383755_0019.summary
2018-10-24 08:11:12,397 INFO [Thread-71] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://hdfsha:8020/user/history/done_intermediate/p_diku/job_1540368383755_0019_conf.xml_tmp to hdfs://hdfsha:8020/user/history/done_intermediate/p_diku/job_1540368383755_0019_conf.xml
2018-10-24 08:11:12,399 INFO [Thread-71] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://hdfsha:8020/user/history/done_intermediate/p_diku/job_1540368383755_0019-1540382951606-p_diku-H2O_31727-1540383072224-0-0-KILLED-root.wma_default_grp-1540382958519.jhist_tmp to hdfs://hdfsha:8020/user/history/done_intermediate/p_diku/job_1540368383755_0019-1540382951606-p_diku-H2O_31727-1540383072224-0-0-KILLED-root.wma_default_grp-1540382958519.jhist
2018-10-24 08:11:12,399 INFO [Thread-71] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped JobHistoryEventHandler. super.stop()
2018-10-24 08:11:12,400 INFO [Thread-71] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job diagnostics to Kill job job_1540368383755_0019 received from p_diku@BDS.UBS.COM (auth:TOKEN) at 10.32.44.96 Job received Kill while in RUNNING state.
2018-10-24 08:11:12,401 INFO [Thread-71] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url is https://a302-0044-9735.she.pwj.com:19890/jobhistory/job/job_1540368383755_0019
2018-10-24 08:11:12,406 INFO [Thread-71] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for application to be successfully unregistered.
2018-10-24 08:11:13,407 INFO [Thread-71] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0
2018-10-24 08:11:13,409 INFO [Thread-71] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory hdfs://hdfsha /user/p_diku/.staging/job_1540368383755_0019
2018-10-24 08:11:13,412 INFO [Thread-71] org.apache.hadoop.ipc.Server: Stopping server on 34682
2018-10-24 08:11:13,413 INFO [IPC Server listener on 34682] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 34682
2018-10-24 08:11:13,413 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2018-10-24 08:11:13,413 INFO [TaskHeartbeatHandler PingChecker] org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler: TaskHeartbeatHandler thread interrupted
2018-10-24 08:11:13,414 INFO [Ping Checker] org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: TaskAttemptFinishingMonitor thread interrupted
2018-10-24 08:11:18,414 INFO [Thread-71] org.apache.hadoop.ipc.Server: Stopping server on 38385
2018-10-24 08:11:18,415 INFO [IPC Server listener on 38385] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 38385
2018-10-24 08:11:18,415 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2018-10-24 08:11:18,418 INFO [Thread-71] org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:0