Member since: 12-12-2013
Posts: 33
Kudos Received: 2
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 941 | 12-03-2017 06:59 PM
01-13-2018
03:55 AM
@pdarvasi Managed disks have Storage Service Encryption enabled by default. I have not had a chance to look at Azure Disk Encryption in this context, but need to for an HDP customer as well. It would help to get some additional details - perhaps via Paige - so we can get the product/engineering team involved if needed.
01-13-2018
03:49 AM
@pdarvasi Sorry for the late response. Had a chance to try this out. Version: local version 2.2.0-96149cf, latest release 1.16.5.
(1) UI looks great (thumbs up!)
(2) Ran into a bug - the "create cluster" button would not get enabled whatsoever :)) - with and without "enable Kerberos" checked
- the template feature is missing
- the SSH key field was randomly showing in red or green as I tabbed through it
- simple, tried-and-tested Ambari blueprint
- existing vnet, subnet, NSG
I duodecuple-checked my entries to ensure accuracy and completeness, but to no avail. 🙂 Appreciate the effort that has gone into this version. Looking forward to fixes and to trying out the test KDC and existing KDC.
01-05-2018
05:51 PM
@fschneider @pdarvasi In the latest Azure release of Cloudbreak - Jan 2018 (sorry, don't have the version handy) - I noticed that my custom DNS server was not being used. Ran into this when I was attempting to enable dynamic DNS updates, domain join, and AD authentication via sssd. I had to do the following to get it to work...
sed -i 's/127.0.0.1/<my DNS server IP>/g' /etc/dhcp/dhclient-enter-hooks
sed -i 's/search ${search}/search ${search} <my domain> /g' /etc/dhcp/dhclient-enter-hooks
12-14-2017
07:54 PM
@pdarvasi Attempted: HDP 2.6, existing vnet/subnet (with custom DNS server), availability sets, Kerberos with a new MIT Kerberos KDC, simple tried-and-tested Ambari blueprint, Cloudbreak 1.16.5.
Issue: After 12+ hours, provisioning did not complete. Some errors:
- Ambari component version: the cluster's current version could not be determined
- Services say installation pending
- ZooKeeper shows restart required, but when a restart is attempted - service invalid state error with message HOST_SVCCOMP_OP_IN_PROGRESS at INIT
Note: I was able to kerberize post cluster provisioning without issues in a separate instance. Any insight is appreciated. Thanks.
Labels: Hortonworks Cloudbreak
12-14-2017
05:45 AM
1 Kudo
@pdarvasi:
(1) Availability set (one for multiple host groups - masters): works fine
(2) Linux kernel version: the one in your screenshot - the version with the patch
(3) Kerberos enabled post-provisioning against an MIT Kerberos KDC: no issues
(4) Kerberos enabled at provision time with a new MIT Kerberos KDC: did not complete in 12 hours... lots of alerts, installs pending, services in invalid state and unable to start, Ambari version could not be determined... will open a separate post
12-12-2017
04:46 PM
@pdarvasi - testing it today.
12-07-2017
03:08 PM
@pdarvasi Let me take a look and get back to you on this (shocked!)
12-06-2017
03:21 PM
A great feature would be the ability to specify, at provision time, whether to encrypt data disks with Azure Disk Encryption. Please share if this is on the roadmap. Thanks.
12-06-2017
03:19 PM
@pdarvasi Thanks!
12-06-2017
03:17 PM
@pdarvasi Will try out and get back to you. Thanks so much!
12-05-2017
05:14 AM
@pdarvasi We appreciate the speed at which the Cloudbreak team is rolling out changes for Azure. Will report back any issues I run into. Can you please confirm whether it includes a fix for this - the Linux kernel version issue? Thanks
12-03-2017
06:59 PM
Solution: The VM FQDN needs to be shorter than what you get with the Azure defaults. This is not a Cloudbreak issue.
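If you want to gauge how much room you have on a node, the longest default service principal is zookeeper/<fqdn>. A quick sanity check (a sketch; the 64-character cap on AD's cn attribute is an inference from the LDAP error in my earlier post, not something AD reports explicitly):
# CN of the longest service principal should fit within ~64 characters for AD
echo -n "zookeeper/$(hostname -f)" | wc -c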
12-03-2017
04:33 PM
Hello,
Issue: Around mid-November 2017, ran into an issue with datanodes not coming up, and the namenode therefore not coming out of safe mode, when we kerberized (MIT Kerberos KDC) a cluster after provisioning via Cloudbreak.
References:
https://issues.apache.org/jira/browse/HDFS-12029
https://access.redhat.com/errata/RHBA-2017:1674
https://community.hortonworks.com/articles/109940/after-os-patching-all-the-datanodes-nodes-are-up-b.html
Temporary fix: Hortonworks support was consulted, and they updated hadoop-env.sh in the HDFS configs in Ambari as follows:
Original entry: export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true ${HADOOP_OPTS}"
With fix: export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true -Xss1280k ${HADOOP_OPTS}"
Restarted services and we were good to go.
Strategic solution: Upgrade the underlying OS image to one with the patched kernel, 3.10.0-514.26.2.el7.x86_64, per the support engineer.
Reporting this since an image upgrade is being worked on - should have done it earlier 😞 @pdarvasi
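To check whether a node already carries the fix (a sketch; simply compares against the kernel version the support engineer cited):
# Patched kernel: 3.10.0-514.26.2.el7.x86_64 - anything older likely needs the -Xss workaround
uname -r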
Labels: Hortonworks Cloudbreak
12-03-2017
04:21 PM
@pdarvasi & @jeff Would be great if we had at least the ARM template out in the marketplace - we currently dont have any offering from Hortonworks to spin up a cluster. Cloudera has Director and an ARM template. Would be great if Hortonworks had a similar model...especially for situations like this.
12-03-2017
02:35 AM
Just found Cloudbreak missing from the marketplace. Please let me know if this is a Microsoft issue so I can escalate internally. I am in the middle of a PoC with strict timelines and am pretty badly impacted. Your support is much appreciated.
Labels: Hortonworks Cloudbreak
12-02-2017
06:55 AM
Attempting to create an HDP cluster with Kerberos at provision time against AD failed. The issue is the same as one reported earlier - a very long VM FQDN exceeding upper limits defined in AD / AAD DS.
11-29-2017
05:15 AM
Provisioned a cluster on Azure using Cloudbreak, and then...
Attempted: Kerberize the cluster using the Ambari Kerberos automated wizard, against an existing Active Directory prepped ahead of time.
Issue: The Kerberos setup fails when it tries to create an SPN for zookeeper. The error seems to point to the length of the CN exceeding a maximum length limit.
STDERR from Ambari Kerberos wizard UI:
2017-11-28 16:41:58,340 - Failed to create principal, zookeeper/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM - Can not create principal : zookeeper/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
STDOUT from Ambari Kerberos wizard UI:
2017-11-28 16:41:57,944 - Processing identities...
2017-11-28 16:41:58,019 - Processing principal, HTTP/den-s16.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
2017-11-28 16:41:58,021 - Principal, HTTP/den-s16.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM, already exists, setting new password
2017-11-28 16:41:58,048 - Processing principal, ambari-qa-denali@DENALI.COM
2017-11-28 16:41:58,049 - Principal, ambari-qa-denali@DENALI.COM, already exists, setting new password
2017-11-28 16:41:58,076 - Processing principal, hdfs-denali@DENALI.COM
2017-11-28 16:41:58,077 - Principal, hdfs-denali@DENALI.COM, already exists, setting new password
2017-11-28 16:41:58,104 - Processing principal, dn/den-s16.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
2017-11-28 16:41:58,106 - Principal, dn/den-s16.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM, already exists, setting new password
2017-11-28 16:41:58,133 - Processing principal, nm/den-s16.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
2017-11-28 16:41:58,134 - Principal, nm/den-s16.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM, already exists, setting new password
2017-11-28 16:41:58,163 - Processing principal, hive/den-s16.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
2017-11-28 16:41:58,165 - Principal, hive/den-s16.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM, already exists, setting new password
2017-11-28 16:41:58,193 - Processing principal, HTTP/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
2017-11-28 16:41:58,195 - Principal, HTTP/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM, already exists, setting new password
2017-11-28 16:41:58,221 - Processing principal, yarn/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
2017-11-28 16:41:58,222 - Principal, yarn/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM, already exists, setting new password
2017-11-28 16:41:58,248 - Processing principal, hive/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
2017-11-28 16:41:58,249 - Principal, hive/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM, already exists, setting new password
2017-11-28 16:41:58,276 - Processing principal, jn/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
2017-11-28 16:41:58,278 - Principal, jn/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM, already exists, setting new password
2017-11-28 16:41:58,306 - Processing principal, rm/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
2017-11-28 16:41:58,307 - Principal, rm/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM, already exists, setting new password
2017-11-28 16:41:58,334 - Processing principal, zookeeper/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
Just to show that several SPNs got created - it consistently fails at zookeeper.
Troubleshooting attempted: Reduced zookeeper to zk and got past the error, only to fail for amshbase; reduced that to amshb and got past the setup, but it failed during smoke testing. We cannot be changing service principal names - this was merely to test the hypothesis that the failure is length related.
Ambari log:
29 Nov 2017 00:47:08,143 INFO [Server Action Executor Worker 464] StackAdvisorRunner:71 - advisor script stderr:
29 Nov 2017 00:47:08,152 INFO [Server Action Executor Worker 464] KerberosHelperImpl:950 - Adding identities for service SQOOP=[SQOOP] to auth to local mapping
29 Nov 2017 00:47:08,152 INFO [Server Action Executor Worker 464] KerberosHelperImpl:967 - Adding identities for component SQOOP to auth to local mapping
29 Nov 2017 00:47:08,152 INFO [Server Action Executor Worker 464] KerberosHelperImpl:950 - Adding identities for service HDFS=[HDFS_CLIENT, ZKFC, DATANODE, JOURNALNODE, NAMENODE] to auth to local mapping
29 Nov 2017 00:47:08,152 INFO [Server Action Executor Worker 464] KerberosHelperImpl:967 - Adding identities for component HDFS_CLIENT to auth to local mapping
29 Nov 2017 00:47:08,153 INFO [Server Action Executor Worker 464] KerberosHelperImpl:967 - Adding identities for component DATANODE to auth to local mapping
29 Nov 2017 00:47:08,153 INFO [Server Action Executor Worker 464] KerberosHelperImpl:967 - Adding identities for component JOURNALNODE to auth to local mapping
29 Nov 2017 00:47:08,153 INFO [Server Action Executor Worker 464] KerberosHelperImpl:967 - Adding identities for component NAMENODE to auth to local mapping
29 Nov 2017 00:47:08,153 INFO [Server Action Executor Worker 464] KerberosHelperImpl:950 - Adding identities for service TEZ=[TEZ_CLIENT] to auth to local mapping
29 Nov 2017 00:47:08,153 INFO [Server Action Executor Worker 464] KerberosHelperImpl:967 - Adding identities for component TEZ_CLIENT to auth to local mapping
29 Nov 2017 00:47:08,153 INFO [Server Action Executor Worker 464] KerberosHelperImpl:950 - Adding identities for service MAPREDUCE2=[MAPREDUCE2_CLIENT, HISTORYSERVER] to auth to local mapping
29 Nov 2017 00:47:08,153 INFO [Server Action Executor Worker 464] KerberosHelperImpl:967 - Adding identities for component HISTORYSERVER to auth to local mapping
29 Nov 2017 00:47:08,153 INFO [Server Action Executor Worker 464] KerberosHelperImpl:950 - Adding identities for service ZOOKEEPER=[ZOOKEEPER_SERVER, ZOOKEEPER_CLIENT] to auth to local mapping
29 Nov 2017 00:47:08,154 INFO [Server Action Executor Worker 464] KerberosHelperImpl:967 - Adding identities for component ZOOKEEPER_SERVER to auth to local mapping
29 Nov 2017 00:47:08,154 INFO [Server Action Executor Worker 464] KerberosHelperImpl:950 - Adding identities for service YARN=[NODEMANAGER, YARN_CLIENT, APP_TIMELINE_SERVER, RESOURCEMANAGER] to auth to local mapping
29 Nov 2017 00:47:08,154 INFO [Server Action Executor Worker 464] KerberosHelperImpl:967 - Adding identities for component NODEMANAGER to auth to local mapping
29 Nov 2017 00:47:08,154 INFO [Server Action Executor Worker 464] KerberosHelperImpl:967 - Adding identities for component APP_TIMELINE_SERVER to auth to local mapping
29 Nov 2017 00:47:08,154 INFO [Server Action Executor Worker 464] KerberosHelperImpl:967 - Adding identities for component RESOURCEMANAGER to auth to local mapping
29 Nov 2017 00:47:08,154 INFO [Server Action Executor Worker 464] KerberosHelperImpl:950 - Adding identities for service KERBEROS=[KERBEROS_CLIENT] to auth to local mapping
29 Nov 2017 00:47:08,154 INFO [Server Action Executor Worker 464] KerberosHelperImpl:967 - Adding identities for component KERBEROS_CLIENT to auth to local mapping
29 Nov 2017 00:47:08,154 INFO [Server Action Executor Worker 464] KerberosHelperImpl:950 - Adding identities for service PIG=[PIG] to auth to local mapping
29 Nov 2017 00:47:08,154 INFO [Server Action Executor Worker 464] KerberosHelperImpl:967 - Adding identities for component PIG to auth to local mapping
29 Nov 2017 00:47:08,154 INFO [Server Action Executor Worker 464] KerberosHelperImpl:950 - Adding identities for service HIVE=[HIVE_SERVER, MYSQL_SERVER, HIVE_METASTORE, HIVE_CLIENT, WEBHCAT_SERVER] to auth to local mapping
29 Nov 2017 00:47:08,155 INFO [Server Action Executor Worker 464] KerberosHelperImpl:967 - Adding identities for component HIVE_SERVER to auth to local mapping
29 Nov 2017 00:47:08,155 INFO [Server Action Executor Worker 464] KerberosHelperImpl:967 - Adding identities for component HIVE_METASTORE to auth to local mapping
29 Nov 2017 00:47:08,155 INFO [Server Action Executor Worker 464] KerberosHelperImpl:967 - Adding identities for component WEBHCAT_SERVER to auth to local mapping
29 Nov 2017 00:47:08,155 INFO [Server Action Executor Worker 464] KerberosHelperImpl:950 - Adding identities for service SLIDER=[SLIDER] to auth to local mapping
29 Nov 2017 00:47:08,155 INFO [Server Action Executor Worker 464] KerberosHelperImpl:967 - Adding identities for component SLIDER to auth to local mapping
29 Nov 2017 00:47:08,155 INFO [Server Action Executor Worker 464] KerberosHelperImpl:950 - Adding identities for service AMBARI_METRICS=[METRICS_MONITOR, METRICS_COLLECTOR] to auth to local mapping
29 Nov 2017 00:47:08,155 INFO [Server Action Executor Worker 464] KerberosHelperImpl:967 - Adding identities for component METRICS_COLLECTOR to auth to local mapping
29 Nov 2017 00:47:08,155 INFO [Server Action Executor Worker 464] KerberosHelperImpl:950 - Adding identities for service SMARTSENSE=[HST_AGENT, HST_SERVER] to auth to local mapping
29 Nov 2017 00:47:08,156 INFO [Server Action Executor Worker 464] KerberosHelperImpl:950 - Adding identities for service SPARK2=[SPARK2_CLIENT, SPARK2_JOBHISTORYSERVER] to auth to local mapping
29 Nov 2017 00:47:08,156 INFO [Server Action Executor Worker 464] KerberosHelperImpl:967 - Adding identities for component SPARK2_CLIENT to auth to local mapping
29 Nov 2017 00:47:08,156 INFO [Server Action Executor Worker 464] KerberosHelperImpl:967 - Adding identities for component SPARK2_JOBHISTORYSERVER to auth to local mapping
29 Nov 2017 00:47:08,557 INFO [Server Action Executor Worker 465] KerberosServerAction:353 - Processing identities...
29 Nov 2017 00:47:08,629 INFO [Server Action Executor Worker 465] CreatePrincipalsServerAction:203 - Processing principal, HTTP/den-s16.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
29 Nov 2017 00:47:08,657 INFO [Server Action Executor Worker 465] CreatePrincipalsServerAction:203 - Processing principal, hdfs-denali@DENALI.COM
29 Nov 2017 00:47:08,684 INFO [Server Action Executor Worker 465] CreatePrincipalsServerAction:203 - Processing principal, dn/den-s16.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
29 Nov 2017 00:47:08,713 INFO [Server Action Executor Worker 465] CreatePrincipalsServerAction:203 - Processing principal, nm/den-s16.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
29 Nov 2017 00:47:08,740 INFO [Server Action Executor Worker 465] CreatePrincipalsServerAction:203 - Processing principal, hive/den-s16.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
29 Nov 2017 00:47:08,768 INFO [Server Action Executor Worker 465] CreatePrincipalsServerAction:203 - Processing principal, HTTP/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
29 Nov 2017 00:47:08,796 INFO [Server Action Executor Worker 465] CreatePrincipalsServerAction:203 - Processing principal, yarn/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
29 Nov 2017 00:47:08,824 INFO [Server Action Executor Worker 465] CreatePrincipalsServerAction:203 - Processing principal, hive/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
29 Nov 2017 00:47:08,852 INFO [Server Action Executor Worker 465] CreatePrincipalsServerAction:203 - Processing principal, rm/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
29 Nov 2017 00:47:08,879 INFO [Server Action Executor Worker 465] CreatePrincipalsServerAction:203 - Processing principal, zookeeper/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
29 Nov 2017 00:47:08,885 ERROR [Server Action Executor Worker 465] CreatePrincipalsServerAction:297 - Failed to create principal, zookeeper/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM - Can not create principal : zookeeper/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
org.apache.ambari.server.serveraction.kerberos.KerberosOperationException: Can not create principal : zookeeper/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
at org.apache.ambari.server.serveraction.kerberos.ADKerberosOperationHandler.createPrincipal(ADKerberosOperationHandler.java:331)
at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.createPrincipal(CreatePrincipalsServerAction.java:256)
at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.processIdentity(CreatePrincipalsServerAction.java:159)
at org.apache.ambari.server.serveraction.kerberos.KerberosServerAction.processRecord(KerberosServerAction.java:532)
at org.apache.ambari.server.serveraction.kerberos.KerberosServerAction.processIdentities(KerberosServerAction.java:414)
at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.execute(CreatePrincipalsServerAction.java:91)
at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:555)
at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:492)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.naming.directory.InvalidAttributeValueException: [LDAP: error code 19 - 00002082: AtrErr: DSID-031519A3, #1:
0: 00002082: DSID-031519A3, problem 1005 (CONSTRAINT_ATT_TYPE), data 0, Att 3 (cn):len 138
]; remaining name '"cn=zookeeper/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net,OU=hdpou,DC=denali,DC=com"'
at com.sun.jndi.ldap.LdapCtx.mapErrorCode(LdapCtx.java:3149)
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:3082)
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2888)
at com.sun.jndi.ldap.LdapCtx.c_createSubcontext(LdapCtx.java:812)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_createSubcontext(ComponentDirContext.java:341)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.createSubcontext(PartialCompositeDirContext.java:268)
at javax.naming.directory.InitialDirContext.createSubcontext(InitialDirContext.java:202)
at org.apache.ambari.server.serveraction.kerberos.ADKerberosOperationHandler.createPrincipal(ADKerberosOperationHandler.java:329)
... 8 more
29 Nov 2017 00:47:08,886 INFO [Server Action Executor Worker 465] KerberosServerAction:457 - Processing identities completed.
29 Nov 2017 00:47:09,559 ERROR [ambari-action-scheduler] ActionScheduler:440 - Operation completely failed, aborting request id: 39
29 Nov 2017 00:47:09,560 INFO [ambari-action-scheduler] ActionScheduler:952 - Service name is , component name is AMBARI_SERVER_ACTIONskipping sending ServiceComponentHostOpFailedEvent for AMBARI_SERVER_ACTION
29 Nov 2017 00:47:09,585 INFO [ambari-action-scheduler] ActionDBAccessorImpl:218 - Aborting command. Hostname null role AMBARI_SERVER_ACTION requestId 39 taskId 466 stageId 2
29 Nov 2017 00:47:09,585 INFO [ambari-action-scheduler] ActionDBAccessorImpl:218 - Aborting command. Hostname null role AMBARI_SERVER_ACTION requestId 39 taskId 467 stageId 3
29 Nov 2017 00:47:09,585 INFO [ambari-action-scheduler] ActionDBAccessorImpl:218 - Aborting command. Hostname den-e0.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net role KERBEROS_CLIENT requestId 39 taskId 468 stageId 4
29 Nov 2017 00:47:09,585 INFO [ambari-action-scheduler] ActionDBAccessorImpl:218 - Aborting command. Hostname den-m1.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net role KERBEROS_CLIENT requestId 39 taskId 469 stageId 4
29 Nov 2017 00:47:09,585 INFO [ambari-action-scheduler] ActionDBAccessorImpl:218 - Aborting command. Hostname den-m12.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net role KERBEROS_CLIENT requestId 39 taskId 470 stageId 4
29 Nov 2017 00:47:09,586 INFO [ambari-action-scheduler] ActionDBAccessorImpl:218 - Aborting command. Hostname den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net role KERBEROS_CLIENT requestId 39 taskId 471 stageId 4
29 Nov 2017 00:47:09,586 INFO [ambari-action-scheduler] ActionDBAccessorImpl:218 - Aborting command. Hostname den-m34.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net role KERBEROS_CLIENT requestId 39 taskId 472 stageId 4
29 Nov 2017 00:47:09,586 INFO [ambari-action-scheduler] ActionDBAccessorImpl:218 - Aborting command. Hostname den-s15.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net role KERBEROS_CLIENT requestId 39 taskId 473 stageId 4
29 Nov 2017 00:47:09,586 INFO [ambari-action-scheduler] ActionDBAccessorImpl:218 - Aborting command. Hostname den-s16.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net role KERBEROS_CLIENT requestId 39 taskId 474 stageId 4
29 Nov 2017 00:47:09,586 INFO [ambari-action-scheduler] ActionDBAccessorImpl:218 - Aborting command. Hostname den-s17.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net role KERBEROS_CLIENT requestId 39 taskId 475 stageId 4
29 Nov 2017 00:47:09,586 INFO [ambari-action-scheduler] ActionDBAccessorImpl:218 - Aborting command. Hostname null role AMBARI_SERVER_ACTION requestId 39 taskId 476 stageId 5
29 Nov 2017 00:47:09,586 INFO [ambari-action-scheduler] ActionDBAccessorImpl:218 - Aborting command. Hostname null role AMBARI_SERVER_ACTION requestId 39 taskId 477 stageId 6
29 Nov 2017 00:48:41,263 INFO [pool-18-thread-1] MetricsServiceImpl:64 - Checking for metrics sink initialization
Deduction: The length is beyond the limit acceptable by Active Directory (likely the 64-character cap on the cn attribute - the working yarn principal's CN is exactly 64 characters, while the failing ones are longer).
OK:
yarn/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
FAILS:
zookeeper/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
amshbase/den-m1.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net@DENALI.COM
Question: (1) Has anyone run into this issue and have a solution to share? I know I can put an MIT Kerberos KDC in front of AD... looking for options. (2) Does the Cloudbreak team have any guidance? Thanks in advance. I am now attempting to provision via Cloudbreak and kerberize at provision time against the existing Active Directory. Fingers crossed.
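A quick way to test the length hypothesis above (a sketch; the 64-character cn cap is an inference from the CONSTRAINT_ATT_TYPE error, and the hostnames are the ones from this cluster):
# Compare candidate principal CNs (service/fqdn, no realm) against the assumed 64-char AD cn cap
for p in \
  "yarn/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net" \
  "zookeeper/den-m23.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net" \
  "amshbase/den-m1.rxo2hisyweyefnkiphzw3u2whg.cx.internal.cloudapp.net"; do
  if [ ${#p} -le 64 ]; then v="OK"; else v="TOO LONG"; fi
  echo "$v len=${#p} $p"
done
This prints OK for yarn (64 chars) and TOO LONG for zookeeper (69) and amshbase (67), matching the observed pass/fail pattern.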
Labels: Hortonworks Cloudbreak
11-09-2017
09:34 PM
@pdarvasi Availability set support for masters, premium managed disk support, and GA of Kerberos will drive increased adoption of Cloudbreak on Azure. We look forward to these.
11-09-2017
09:30 PM
@pdarvasi - thanks so much!
10-20-2017
08:08 PM
@jeff: The context here is Cloudbreak - provisioning HDP using Cloudbreak. Some of our enterprise customers have a requirement to use RHEL. Can Cloudbreak be configured to use a RHEL image from the Azure marketplace for the cluster nodes instead of the default?
10-17-2017
02:28 AM
Hello,
Can someone please share a blueprint for Hive HA?
The blueprint I am trying, pasted below, gives me the error -
Failed to create cluster: Incorrect number of 'HIVE_SERVER' components are in '[master_2, master_3]' hostgroups: count: 2, min: 1 max: 1
I did see this and this; I am new to Ambari blueprints and would like to start with a minimal configuration.
Any help is much appreciated.
{
"Blueprints": {
"blueprint_name": "ha-trials",
"stack_name": "HDP",
"stack_version": "2.6"
},
"host_groups": [
{
"name": "edge",
"cardinality": "1",
"components": [
{
"name": "HDFS_CLIENT"
},
{
"name": "MAPREDUCE2_CLIENT"
},
{
"name": "METRICS_MONITOR"
},
{
"name": "TEZ_CLIENT"
},
{
"name": "YARN_CLIENT"
},
{
"name": "ZOOKEEPER_CLIENT"
},
{
"name": "PIG"
},
{
"name": "SQOOP"
},
{
"name": "SLIDER"
},
{
"name": "HIVE_CLIENT"
}
]
},
{
"name": "master_1",
"cardinality": "1",
"components": [
{
"name": "HISTORYSERVER"
},
{
"name": "JOURNALNODE"
},
{
"name": "METRICS_MONITOR"
},
{
"name": "NAMENODE"
},
{
"name": "ZKFC"
},
{
"name": "ZOOKEEPER_SERVER"
},
{
"name": "SLIDER"
}
]
},
{
"name": "master_2",
"cardinality": "1",
"components": [
{
"name": "APP_TIMELINE_SERVER"
},
{
"name": "JOURNALNODE"
},
{
"name": "METRICS_MONITOR"
},
{
"name": "RESOURCEMANAGER"
},
{
"name": "ZOOKEEPER_SERVER"
},
{
"name": "MYSQL_SERVER"
},
{
"name": "HIVE_SERVER"
},
{
"name": "HIVE_METASTORE"
},
{
"name": "WEBHCAT_SERVER"
},
{
"name": "TEZ_CLIENT"
},
{
"name": "HIVE_CLIENT"
},
{
"name": "ZOOKEEPER_CLIENT"
}
]
},
{
"name": "master_3",
"cardinality": "1",
"components": [
{
"name": "JOURNALNODE"
},
{
"name": "METRICS_MONITOR"
},
{
"name": "NAMENODE"
},
{
"name": "ZKFC"
},
{
"name": "ZOOKEEPER_SERVER"
},
{
"name": "RESOURCEMANAGER"
},
{
"name": "HIVE_SERVER"
},
{
"name": "HIVE_METASTORE"
},
{
"name": "WEBHCAT_SERVER"
},
{
"name": "HCAT"
},
{
"name": "HIVE_CLIENT"
}
]
},
{
"name": "slave_1",
"components": [
{
"name": "DATANODE"
},
{
"name": "METRICS_MONITOR"
},
{
"name": "NODEMANAGER"
},
{
"name": "TEZ_CLIENT"
},
{
"name": "HIVE_CLIENT"
}
],
"cardinality": "3+"
},
{
"name": "management",
"configurations": [],
"cardinality": "3+",
"components": [
{
"name": "METRICS_MONITOR"
},
{
"name": "METRICS_COLLECTOR"
}
]
}
],
"configurations": [
{
"core-site": {
"properties": {
"fs.defaultFS": "hdfs://mycluster",
"ha.zookeeper.quorum": "%HOSTGROUP::master_1%:2181,%HOSTGROUPHOSTGROUP::master_2%:2181,%HOSTGROUP::master_3%:2181",
"fs.trash.interval": "4320"
}
}
},
{
"hdfs-site": {
"properties": {
"dfs.client.failover.proxy.provider.mycluster": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
"dfs.ha.automatic-failover.enabled": "true",
"dfs.ha.fencing.methods": "shell(/bin/true)",
"dfs.ha.namenodes.mycluster": "nn1,nn2",
"dfs.namenode.http-address": "%HOSTGROUP::master_1%:50070",
"dfs.namenode.http-address.mycluster.nn1": "%HOSTGROUP::master_1%:50070",
"dfs.namenode.http-address.mycluster.nn2": "%HOSTGROUP::master_3%:50070",
"dfs.namenode.https-address": "%HOSTGROUP::master_1%:50470",
"dfs.namenode.https-address.mycluster.nn1": "%HOSTGROUP::master_1%:50470",
"dfs.namenode.https-address.mycluster.nn2": "%HOSTGROUP::master_3%:50470",
"dfs.namenode.rpc-address.mycluster.nn1": "%HOSTGROUP::master_1%:8020",
"dfs.namenode.rpc-address.mycluster.nn2": "%HOSTGROUP::master_3%:8020",
"dfs.namenode.shared.edits.dir": "qjournal://%HOSTGROUP::master_1%:8485;%HOSTGROUP::master_2%:8485;%HOSTGROUP::master_3%:8485/mycluster",
"dfs.nameservices": "mycluster",
"dfs.namenode.safemode.threshold-pct": "0.99"
}
}
},
{
"yarn-site": {
"properties": {
"hadoop.registry.rm.enabled": "false",
"hadoop.registry.zk.quorum": "%HOSTGROUP::master_3%:2181,%HOSTGROUP::master_2%:2181,%HOSTGROUP::master_1%:2181",
"yarn.log.server.url": "http://%HOSTGROUP::master_2%:19888/jobhistory/logs",
"yarn.resourcemanager.address": "%HOSTGROUP::master_2%:8050",
"yarn.resourcemanager.admin.address": "%HOSTGROUP::master_2%:8141",
"yarn.resourcemanager.cluster-id": "yarn-cluster",
"yarn.resourcemanager.ha.automatic-failover.zk-base-path": "/yarn-leader-election",
"yarn.resourcemanager.ha.enabled": "true",
"yarn.resourcemanager.ha.rm-ids": "rm1,rm2",
"yarn.resourcemanager.hostname": "%HOSTGROUP::master_2%",
"yarn.resourcemanager.recovery.enabled": "true",
"yarn.resourcemanager.resource-tracker.address": "%HOSTGROUP::master_2%:8025",
"yarn.resourcemanager.scheduler.address": "%HOSTGROUP::master_2%:8030",
"yarn.resourcemanager.store.class": "org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore",
"yarn.resourcemanager.webapp.address": "%HOSTGROUP::master_2%:8088",
"yarn.resourcemanager.webapp.https.address": "%HOSTGROUP::master_2%:8090",
"yarn.timeline-service.address": "%HOSTGROUP::master_2%:10200",
"yarn.timeline-service.webapp.address": "%HOSTGROUP::master_2%:8188",
"yarn.timeline-service.webapp.https.address": "%HOSTGROUP::master_2%:8190",
"yarn.resourcemanager.zk-address": "%HOSTGROUP::master_2%:2181,%HOSTGROUP::master_1%:2181,%HOSTGROUP::master_3%:2181",
"yarn.resourcemanager.hostname.rm1": "%HOSTGROUP::master_2%",
"yarn.resourcemanager.hostname.rm2": "%HOSTGROUP::master_3%",
"yarn.acl.enable": "true"
}
}
},
{
"hive-env": {
"properties": {
"cost_based_optimizer": "On",
"hcat_log_dir": "/var/log/webhcat",
"hcat_pid_dir": "/var/run/webhcat",
"hcat_user": "hcat",
"hive_ambari_database": "MySQL",
"hive_database": "New MySQL Database",
"hive_database_name": "hive",
"hive_database_type": "mysql",
"hive_exec_orc_storage_strategy": "SPEED",
"hive_log_dir": "/var/log/hive",
"hive_metastore_port": "9083",
"hive_pid_dir": "/var/run/hive",
"hive_security_authorization": "None",
"hive_timeline_logging_enabled": "true",
"hive_txn_acid": "Off",
"hive_user": "hive",
"webhcat_user": "hcat"
}
}
},
{
"hive-site": {
"hive.exec.compress.output": "true",
"hive.merge.mapfiles": "true",
"hive.server2.tez.initialize.default.sessions": "true",
"hive.server2.transport.mode": "http",
"ambari.hive.db.schema.name": "hive",
"hive.zookeeper.client.port": "2181",
"hive.zookeeper.namespace": "hive_zookeeper_namespace",
"hive.zookeeper.quorum": "%HOSTGROUP::master_2%:2181,%HOSTGROUP::master_1%:2181,%HOSTGROUP::master_3%:2181",
"javax.jdo.option.ConnectionDriverName": "com.mysql.jdbc.Driver",
"javax.jdo.option.ConnectionURL": "jdbc:mysql://%HOSTGROUP::master_2%/hive?createDatabaseIfNotExist=true",
"javax.jdo.option.ConnectionUserName": "hive"
}
}
]
}
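Separately, since %HOSTGROUP::name% tokens are easy to mistype, a quick way to list every token in the blueprint for review (a sketch; assumes the blueprint above is saved locally as blueprint.json, a hypothetical filename):
# Each distinct hostgroup token with its usage count - typos stand out immediately
grep -o '%HOSTGROUP::[^%]*%' blueprint.json | sort | uniq -c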
10-15-2017
04:12 PM
Hello-
Attempted: Create a multi-master cluster on Azure, with masters and workers placed in a master node availability set and a worker node availability set respectively. Blueprint: each master is assigned to a host group similar to this. Availability sets: in the "Configure cluster" tab of the provisioning steps, created an availability set for masters called "as-masternodes".
Issue: The "as-masternodes" availability set is available for selection for only one master via the GUI, after which it becomes unavailable for selection for the other master nodes. Only one master ends up in the availability set specified for master nodes.
Question: Please let me know how I can add all master nodes to the same availability set. Thanks.
Labels: Hortonworks Cloudbreak
10-13-2017
05:11 PM
Thanks so much, @fschneider
10-12-2017
01:16 PM
A customer of mine would like to use their own DNS server with Cloudbreak on Azure. Can you please share how this can be configured, if supported? If not supported, please share whether it is a roadmap item. Thanks.
Labels: Hortonworks Cloudbreak
10-12-2017
01:13 PM
A customer of mine will most likely bring their own RHEL 7.x custom image and put it in the marketplace, or use an existing marketplace image with "Bring your own license" (BYOL) or "Pay as you go" (PAYG). Please share how any of these can be supported. Are there any best practices/tuning/configuration that need to be applied to the images? From a roadmap perspective, it would be great if there were options to choose from, and if HWX had pre-tuned/pre-configured images readily available in the marketplace. 🙂
Labels: Hortonworks Cloudbreak
10-12-2017
01:09 PM
@pdarvasi: Thanks for the quick response. Yes - would like to use premium managed disks for master nodes. Also, in lower environments, standard managed disks for workers, and in production, premium. This is a need for a customer. Can this be rolled out quickly? If yes, do share timelines. Thanks.
10-06-2017
03:20 AM
Hello, I am trying to create a custom template and pick premium managed disks for masters and standard managed disks for workers. The documentation details the option of selecting "Volume Type" for making such a distinction: https://hortonworks.github.io/cloudbreak-azure-docs/azure-config/index.html under "Custom template". However, in my Cloudbreak GUI, I don't see the "Volume type" dropdown. What am I missing here? Thanks in advance. Anagha
Labels: Hortonworks Cloudbreak
09-25-2017
08:40 PM
Looking for documentation on installing HDF on Azure. I see that there is no marketplace template, so it will be a pure IaaS setup. This is for a PoC. The plan is to set up a 3-node NiFi-only cluster (no Kafka/Storm etc.), with one management node for security/operations, leveraging Ambari to install NiFi. Looking for guidance specifically on these areas:
- OS image to use on Azure
- Any OS-level tuning/configuration that needs to be done
- Anything networking related besides the Azure vnet
- Recommended foundational software with versions - e.g. Java version and anything else
- Minimum config - VM SKU, disk SKU, and disk partitioning for the operations and security node
- Minimum config - VM SKU, disk SKU, disk partitioning for NiFi nodes
- Any best practices
- Detailed documentation
Thanks in advance.
Labels: Cloudera DataFlow (CDF)
08-18-2017
09:20 AM
Problem summary: I am unable to read from nested subdirectories in my Spark program, despite setting the required Hadoop configuration (see "Attempted" below). I get the error below (full error in gist - further below):
Exception in thread "main" java.io.FileNotFoundException: File /user/akhanolk/data/myq/parsed/myq-app-logs/to-be-compacted/flat-view-format/*/* does not exist.
Any help is appreciated.
Version: Spark 2.2.0 on CDH 5.12 (upgraded Spark, Java)
Directory layout:
$ hdfs dfs -ls -R /user/akhanolk/data/myq/parsed/myq-app-logs/to-be-compacted/flat-view-format/*/part* | awk '{print $8}'
/user/akhanolk/data/myq/parsed/myq-app-logs/to-be-compacted/flat-view-format/batch_id=1502939225073/part-00000-3a44cd00-e895-4a01-9ab9-946064b739d4-c000.parquet
/user/akhanolk/data/myq/parsed/myq-app-logs/to-be-compacted/flat-view-format/batch_id=1502939234036/part-00000-cbd47353-0590-4cc1-b10d-c18886df1c25-c000.parquet
/user/akhanolk/data/myq/parsed/myq-app-logs/to-be-compacted/flat-view-format/batch_id=1502939238389/part-00000-a3d672fd-4b5c-4ad1-a85c-4c31829c3bd2-c000.parquet
Input directory parameter passed: /user/akhanolk/data/myq/parsed/myq-app-logs/to-be-compacted/flat-view-format/*/*
Attempted (1): Set the parameter in code...
val sparkSession: SparkSession = SparkSession.builder().master("yarn").getOrCreate()
//Recursive glob support & loglevel
import sparkSession.implicits._
sparkSession.sparkContext.hadoopConfiguration.setBoolean("spark.hadoop.mapreduce.input.fileinputformat.input.dir.recursive", true)
Did not see the configuration in place in the Spark UI.
Attempted (2): Left the setting above as is, and added it in spark-submit, on the CLI. I do see the configuration in the Spark UI, but same error - it cannot traverse into the directory structure.
Command:
spark-submit --class com....bda.util.CompactParsedLogs --conf spark.hadoop.mapreduce.input.fileinputformat.input.dir.recursive=true ...
Code:
//Spark Session
val sparkSession: SparkSession = SparkSession.builder().master("yarn").getOrCreate()
//Recursive glob support
val conf = new SparkConf()
val cliRecursiveGlobConf = conf.get("spark.hadoop.mapreduce.input.fileinputformat.input.dir.recursive")
import sparkSession.implicits._
sparkSession.sparkContext.hadoopConfiguration.set("spark.hadoop.mapreduce.input.fileinputformat.input.dir.recursive", cliRecursiveGlobConf)
Error & overall output:
17/08/18 15:59:15 INFO spark.SparkContext: Running Spark version 2.2.0.cloudera1 17/08/18 15:59:16 INFO spark.SparkContext: Submitted application: com.chamberlain.bda.util.CompactParsedLogs 17/08/18 15:59:16 INFO spark.SecurityManager: Changing view acls to: akhanolk 17/08/18 15:59:16 INFO spark.SecurityManager: Changing modify acls to: akhanolk 17/08/18 15:59:16 INFO spark.SecurityManager: Changing view acls groups to: 17/08/18 15:59:16 INFO spark.SecurityManager: Changing modify acls groups to: 17/08/18 15:59:16 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(akhanolk); groups with view permissions: Set(); users with modify permissions: Set(akhanolk); groups with modify permissions: Set() 17/08/18 15:59:16 INFO util.Utils: Successfully started service 'sparkDriver' on port 45481.
17/08/18 15:59:16 INFO spark.SparkEnv: Registering MapOutputTracker 17/08/18 15:59:16 INFO spark.SparkEnv: Registering BlockManagerMaster 17/08/18 15:59:16 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information 17/08/18 15:59:16 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up 17/08/18 15:59:16 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-6f104040-1a4a-4645-a545-4d73da098e94 17/08/18 15:59:16 INFO memory.MemoryStore: MemoryStore started with capacity 912.3 MB 17/08/18 15:59:16 INFO spark.SparkEnv: Registering OutputCommitCoordinator 17/08/18 15:59:16 INFO util.log: Logging initialized @2062ms 17/08/18 15:59:17 INFO server.Server: jetty-9.3.z-SNAPSHOT 17/08/18 15:59:17 INFO server.Server: Started @2149ms 17/08/18 15:59:17 INFO server.AbstractConnector: Started ServerConnector@28c0b664{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} 17/08/18 15:59:17 INFO util.Utils: Successfully started service 'SparkUI' on port 4040. 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4d4d8fcf{/jobs,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1cefc4b3{/jobs/json,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6f6a7463{/jobs/job,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6ca320ab{/jobs/job/json,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1e53135d{/stages,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3a7704c{/stages/json,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@619bd14c{/stages/stage,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7561db12{/stages/stage/json,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@24b52d3e{/stages/pool,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6e9c413e{/stages/pool/json,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5af5def9{/storage,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@36dce7ed{/storage/json,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@33d05366{/storage/rdd,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7692cd34{/storage/rdd/json,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@32c0915e{/environment,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@70f43b45{/environment/json,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@10ad20cb{/executors,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2c282004{/executors/json,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7bfc3126{/executors/threadDump,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO 
handler.ContextHandler: Started o.s.j.s.ServletContextHandler@53bc1328{/executors/threadDump/json,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3c1e3314{/static,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3f3c966c{/,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4102b1b1{/api,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@77b325b3{/jobs/job/kill,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7e8e8651{/stages/stage/kill,null,AVAILABLE,@Spark} 17/08/18 15:59:17 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.5.0.5:4040 17/08/18 15:59:17 INFO spark.SparkContext: Added JAR file:/home/akhanolk/apps/myqIngest/streaming/MyQIngest-1.0.jar at spark://10.5.0.5:45481/jars/MyQIngest-1.0.jar with timestamp 1503071957190 17/08/18 15:59:17 INFO util.Utils: Using initial executors = 3, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances 17/08/18 15:59:18 INFO client.RMProxy: Connecting to ResourceManager at cdh-mn-2b4cb552.cdh-cluster.dev/10.5.0.6:8032 17/08/18 15:59:18 INFO yarn.Client: Requesting a new application from cluster with 4 NodeManagers 17/08/18 15:59:18 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (36070 MB per container) 17/08/18 15:59:18 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead 17/08/18 15:59:18 INFO yarn.Client: Setting up container launch context for our AM 17/08/18 15:59:18 INFO yarn.Client: Setting up the launch environment for our AM container 17/08/18 15:59:18 INFO yarn.Client: Preparing resources for our AM container 17/08/18 15:59:19 INFO yarn.Client: Uploading resource file:/tmp/spark-0e51fc77-0ed0-42fc-99a8-e98614820f13/__spark_conf__368044468245332078.zip -> hdfs://cdh-mn-2b4cb552.cdh-cluster.dev:8020/user/akhanolk/.sparkStaging/application_1501192010062_0278/__spark_conf__.zip 17/08/18 15:59:20 INFO spark.SecurityManager: Changing view acls to: akhanolk 17/08/18 15:59:20 INFO spark.SecurityManager: Changing modify acls to: akhanolk 17/08/18 15:59:20 INFO spark.SecurityManager: Changing view acls groups to: 17/08/18 15:59:20 INFO spark.SecurityManager: Changing modify acls groups to: 17/08/18 15:59:20 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(akhanolk); groups with view permissions: Set(); users with modify permissions: Set(akhanolk); groups with modify permissions: Set() 17/08/18 15:59:20 INFO yarn.Client: Submitting application application_1501192010062_0278 to ResourceManager 17/08/18 15:59:20 INFO impl.YarnClientImpl: Submitted application application_1501192010062_0278 17/08/18 15:59:20 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1501192010062_0278 and attemptId None 17/08/18 15:59:21 INFO yarn.Client: Application report for application_1501192010062_0278 (state: ACCEPTED) 17/08/18 15:59:21 INFO yarn.Client: client token: N/A diagnostics: N/A ApplicationMaster host: N/A ApplicationMaster RPC port: -1 queue: root.users.akhanolk start time: 1503071960167 final status: UNDEFINED tracking URL: 
http://cdh-mn-2b4cb552.cdh-cluster.dev:8088/proxy/application_1501192010062_0278/ user: akhanolk 17/08/18 15:59:22 INFO yarn.Client: Application report for application_1501192010062_0278 (state: ACCEPTED) 17/08/18 15:59:23 INFO yarn.Client: Application report for application_1501192010062_0278 (state: ACCEPTED) 17/08/18 15:59:23 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark-client://YarnAM) 17/08/18 15:59:23 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> cdh-mn-2b4cb552.cdh-cluster.dev, PROXY_URI_BASES -> http://cdh-mn-2b4cb552.cdh-cluster.dev:8088/proxy/application_1501192010062_0278), /proxy/application_1501192010062_0278 17/08/18 15:59:23 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter 17/08/18 15:59:24 INFO yarn.Client: Application report for application_1501192010062_0278 (state: RUNNING) 17/08/18 15:59:24 INFO yarn.Client: client token: N/A diagnostics: N/A ApplicationMaster host: 10.5.0.8 ApplicationMaster RPC port: 0 queue: root.users.akhanolk start time: 1503071960167 final status: UNDEFINED tracking URL: http://cdh-mn-2b4cb552.cdh-cluster.dev:8088/proxy/application_1501192010062_0278/ user: akhanolk 17/08/18 15:59:24 INFO cluster.YarnClientSchedulerBackend: Application application_1501192010062_0278 has started running. 17/08/18 15:59:24 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 37415. 17/08/18 15:59:24 INFO netty.NettyBlockTransferService: Server created on 10.5.0.5:37415 17/08/18 15:59:24 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy 17/08/18 15:59:24 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.5.0.5, 37415, None) 17/08/18 15:59:24 INFO storage.BlockManagerMasterEndpoint: Registering block manager 10.5.0.5:37415 with 912.3 MB RAM, BlockManagerId(driver, 10.5.0.5, 37415, None) 17/08/18 15:59:24 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.5.0.5, 37415, None) 17/08/18 15:59:24 INFO storage.BlockManager: external shuffle service port = 7337 17/08/18 15:59:24 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.5.0.5, 37415, None) 17/08/18 15:59:24 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2ca54da9{/metrics/json,null,AVAILABLE,@Spark} 17/08/18 15:59:24 INFO scheduler.EventLoggingListener: Logging events to hdfs://cdh-mn-2b4cb552.cdh-cluster.dev:8020/user/spark/spark2ApplicationHistory/application_1501192010062_0278 17/08/18 15:59:24 INFO util.Utils: Using initial executors = 3, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances 17/08/18 15:59:27 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.5.0.10:39138) with ID 3 17/08/18 15:59:27 INFO spark.ExecutorAllocationManager: New executor 3 has registered (new total is 1) 17/08/18 15:59:27 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.5.0.10:39142) with ID 1 17/08/18 15:59:27 INFO spark.ExecutorAllocationManager: New executor 1 has registered (new total is 2) 17/08/18 15:59:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager 
cdh-wn-e043867b.cdh-cluster.dev:41719 with 366.3 MB RAM, BlockManagerId(3, cdh-wn-e043867b.cdh-cluster.dev, 41719, None) 17/08/18 15:59:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cdh-wn-e043867b.cdh-cluster.dev:42490 with 366.3 MB RAM, BlockManagerId(1, cdh-wn-e043867b.cdh-cluster.dev, 42490, None) 17/08/18 15:59:27 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.5.0.10:39144) with ID 2 17/08/18 15:59:27 INFO spark.ExecutorAllocationManager: New executor 2 has registered (new total is 3) 17/08/18 15:59:27 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8 17/08/18 15:59:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cdh-wn-e043867b.cdh-cluster.dev:41433 with 366.3 MB RAM, BlockManagerId(2, cdh-wn-e043867b.cdh-cluster.dev, 41433, None) 17/08/18 15:59:27 INFO internal.SharedState: loading hive config file: file:/etc/spark2/conf.cloudera.spark2_on_yarn/yarn-conf/hive-site.xml 17/08/18 15:59:27 INFO internal.SharedState: spark.sql.warehouse.dir is not set, but hive.metastore.warehouse.dir is set. Setting spark.sql.warehouse.dir to the value of hive.metastore.warehouse.dir ('/user/hive/warehouse'). 17/08/18 15:59:27 INFO internal.SharedState: Warehouse path is '/user/hive/warehouse'. 17/08/18 15:59:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@26fadd98{/SQL,null,AVAILABLE,@Spark} 17/08/18 15:59:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3db6dd52{/SQL/json,null,AVAILABLE,@Spark} 17/08/18 15:59:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@23ad2d17{/SQL/execution,null,AVAILABLE,@Spark} 17/08/18 15:59:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@25f0c5e7{/SQL/execution/json,null,AVAILABLE,@Spark} 17/08/18 15:59:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@18cf5c52{/static/sql,null,AVAILABLE,@Spark} 17/08/18 15:59:28 INFO hive.HiveUtils: Initializing HiveMetastoreConnection version 1.1.0 using 
[... long HiveMetastoreConnection classpath listing truncated in the original post ...]
ra/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/jamon-runtime-2.3.1.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/jackson-xc-1.9.2.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/jackson-databind-2.2.2.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/jackson-annotations-2.2.2.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/zookeeper.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/velocity-1.5.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/snappy-java-1.0.4.1.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/plexus-utils-1.5.6.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/paranamer-2.3.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/oro-2.0.8.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/httpclient-4.2.5.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/xz-1.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/tempus-fugit-1.1.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/super-csv-2.2.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/stax-api-1.0.1.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/servlet-api-2.5.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/opencsv-2.3.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/metrics-jvm-3.0.2.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/metrics-json-3.0.2.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/metrics-core-3.0.2.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/maven-scm-provider-svnexe-1.4.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/maven-scm-provider-svn-commons-1.4.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/maven-scm-api-1.4.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/mail-1.4.1.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/logredactor-1.0.3.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/junit-4.11.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/jta-1.1.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/jsr305-3.0.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/jsp-api-2.1.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/jpam-1.1.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/joda-time-1.6.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/jline-2.12.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/jetty-all-server-7.6.0.v20120127.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/jetty-all-7.6.0.v20120127.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/jersey-servlet-1.14.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/..
/hive/lib/jersey-server-1.14.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/jdo-api-3.0.1.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/jcommander-1.32.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/jasper-runtime-5.5.23.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/jasper-compiler-5.5.23.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/janino-2.7.6.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/jackson-jaxrs-1.9.2.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/jackson-core-2.2.2.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/ivy-2.0.0-rc2.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/parquet-hadoop-bundle.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/stringtemplate-3.2.1.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/regexp-1.3.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/httpcore-4.2.5.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/../hive/lib/hive-testutils-1.1.0-cdh5.12.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/activation-1.1.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/activation.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/apacheds-i18n-2.0.0-M15.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/apacheds-i18n.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/apacheds-kerberos-codec-2.0.0-M15.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/apacheds-kerberos-codec.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/api-asn1-api-1.0.0-M20.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/api-asn1-api.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/api-util-1.0.0-M20.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/api-util.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/avro.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/aws-java-sdk-bundle-1.11.134.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/aws-java-sdk-bundle.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/azure-data-lake-store-sdk-2.1.4.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/azure-data-lake-store-sdk.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-beanutils-1.9.2.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-beanutils-core-1.8.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-beanutils-core.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-beanutils.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-cli-1.2.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-cli.jar:file:/opt/cloudera/
parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-codec-1.4.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-codec.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-collections-3.2.2.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-collections.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-compress-1.4.1.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-compress.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-configuration-1.6.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-configuration.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-digester-1.8.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-digester.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-httpclient-3.1.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-httpclient.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-io-2.4.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-io.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-lang-2.6.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-lang.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-logging-1.1.3.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-logging.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-math3-3.1.1.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-math3.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-net-3.1.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/commons-net.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/curator-client-2.7.1.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/curator-client.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/curator-framework-2.7.1.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/curator-framework.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/curator-recipes-2.7.1.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/curator-recipes.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/gson-2.2.4.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/gson.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/guava-11.0.2.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/guava.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-annotations-2.6.0-cdh5.12.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-annotations.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-auth-2.6.0-cdh5.12.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-auth.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-aws-2.6.0-cdh5.12.0.jar:file:/opt/clou
dera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-aws.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-azure-datalake-2.6.0-cdh5.12.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-azure-datalake.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-common-2.6.0-cdh5.12.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-common.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-hdfs-2.6.0-cdh5.12.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-hdfs.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-mapreduce-client-app-2.6.0-cdh5.12.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-mapreduce-client-app.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-mapreduce-client-common-2.6.0-cdh5.12.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-mapreduce-client-common.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-mapreduce-client-core-2.6.0-cdh5.12.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-mapreduce-client-core.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.12.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-mapreduce-client-jobclient.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-mapreduce-client-shuffle-2.6.0-cdh5.12.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-mapreduce-client-shuffle.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-yarn-api-2.6.0-cdh5.12.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-yarn-api.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-yarn-client-2.6.0-cdh5.12.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-yarn-client.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-yarn-common-2.6.0-cdh5.12.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-yarn-common.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-yarn-server-common-2.6.0-cdh5.12.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/hadoop-yarn-server-common.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/htrace-core4-4.0.1-incubating.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/htrace-core4.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/httpclient-4.2.5.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/httpclient.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/httpcore-4.2.5.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/httpcore.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/jackson-annotations-2.2.3.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/jackson-annotations.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/jackson-core-
2.2.3.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/jackson-core.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/jackson-databind-2.2.3.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/jackson-databind.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/jackson-jaxrs-1.8.8.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/jackson-jaxrs.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/jackson-xc-1.8.8.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/jackson-xc.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/zookeeper.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/xz.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/xz-1.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/xmlenc.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/xmlenc-0.52.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/xml-apis.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/xml-apis-1.3.04.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/xercesImpl.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/xercesImpl-2.9.1.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/stax-api.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/stax-api-1.0-2.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/snappy-java.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/snappy-java-1.0.4.1.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/slf4j-log4j12.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/slf4j-api.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/slf4j-api-1.7.5.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/servlet-api.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/servlet-api-2.5.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/protobuf-java.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/protobuf-java-2.5.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/paranamer.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/paranamer-2.3.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/netty.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/netty-3.10.5.Final.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/log4j.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/log4j-1.2.17.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/leveldbjni-all.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/leveldbjni-all-1.8.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/jsr305.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/jsr305-3.0.0.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/jetty-util.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/jetty-uti
l-6.1.26.cloudera.4.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/jersey-core.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/jersey-core-1.9.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/jersey-client.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/jersey-client-1.9.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/jaxb-api.jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/lib/hadoop/client/jaxb-api-2.2.2.jar 17/08/18 15:59:29 INFO session.SessionState: Created local directory: /tmp/1142bed0-6a21-4016-88a2-2268c070d3b0_resources 17/08/18 15:59:29 INFO session.SessionState: Created HDFS directory: /tmp/hive/akhanolk/1142bed0-6a21-4016-88a2-2268c070d3b0 17/08/18 15:59:29 INFO session.SessionState: Created local directory: /tmp/akhanolk/1142bed0-6a21-4016-88a2-2268c070d3b0 17/08/18 15:59:29 INFO session.SessionState: Created HDFS directory: /tmp/hive/akhanolk/1142bed0-6a21-4016-88a2-2268c070d3b0/_tmp_space.db 17/08/18 15:59:29 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr. 17/08/18 15:59:29 INFO client.HiveClientImpl: Warehouse location for Hive client (version 1.1.0) is /user/hive/warehouse 17/08/18 15:59:29 INFO hive.metastore: Trying to connect to metastore with URI thrift://cdh-mn-2b4cb552.cdh-cluster.dev:9083 17/08/18 15:59:29 INFO hive.metastore: Opened a connection to metastore, current connections: 1 17/08/18 15:59:29 INFO hive.metastore: Connected to metastore. 17/08/18 15:59:29 INFO session.SessionState: Created local directory: /tmp/0e5ab2ac-30ca-4df8-bfaa-f25d8d18a0f2_resources 17/08/18 15:59:29 INFO session.SessionState: Created HDFS directory: /tmp/hive/akhanolk/0e5ab2ac-30ca-4df8-bfaa-f25d8d18a0f2 17/08/18 15:59:29 INFO session.SessionState: Created local directory: /tmp/akhanolk/0e5ab2ac-30ca-4df8-bfaa-f25d8d18a0f2 17/08/18 15:59:29 INFO session.SessionState: Created HDFS directory: /tmp/hive/akhanolk/0e5ab2ac-30ca-4df8-bfaa-f25d8d18a0f2/_tmp_space.db 17/08/18 15:59:29 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr. 17/08/18 15:59:29 INFO client.HiveClientImpl: Warehouse location for Hive client (version 1.1.0) is /user/hive/warehouse 17/08/18 15:59:29 INFO state.StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint Exception in thread "main" java.io.FileNotFoundException: File /user/akhanolk/data/myq/parsed/myq-app-logs/to-be-compacted/flat-view-format/batch_id=*/* does not exist. 
  at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:744)
  at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:110)
  at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:805)
  at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:801)
  at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
  at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:801)
  at com.chamberlain.bda.util.CompactParsedLogs$.main(CompactParsedLogs.scala:47)
  at com.chamberlain.bda.util.CompactParsedLogs.main(CompactParsedLogs.scala)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
  at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
  at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
  at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
  at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
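The trace points at CompactParsedLogs.scala:47, where DistributedFileSystem.listStatus() is handed a path containing wildcards. listStatus() does not expand globs; it treats batch_id=*/* as a literal directory name and throws FileNotFoundException when no such directory exists. A minimal sketch of the usual workaround, expanding the pattern with FileSystem.globStatus() first (the object name GlobListSketch and the surrounding scaffolding are hypothetical; only the path is copied from the error above):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileStatus, FileSystem, Path}

object GlobListSketch {
  def main(args: Array[String]): Unit = {
    // Picks up HDFS settings from the cluster's core-site.xml/hdfs-site.xml.
    val fs = FileSystem.get(new Configuration())
    val glob = new Path("/user/akhanolk/data/myq/parsed/myq-app-logs/to-be-compacted/flat-view-format/batch_id=*/*")
    // globStatus() expands the wildcard and returns the matching entries;
    // it can return null when nothing matches, so guard before iterating.
    val matches: Array[FileStatus] = Option(fs.globStatus(glob)).getOrElse(Array.empty[FileStatus])
    matches.foreach(status => println(status.getPath))
  }
}

If the listing is only a precursor to reading the files with Spark, note that spark.read and sc.textFile already expand glob patterns themselves, so the explicit listStatus() step can often be dropped entirely.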
... View more
Labels:
- Apache Spark