Member since
09-14-2015
41 Posts
16 Kudos Received
7 Solutions
My Accepted Solutions
Title | Views | Posted
--- | --- | ---
 | 1489 | 07-11-2017 05:38 AM
 | 1180 | 01-11-2017 05:38 PM
 | 1262 | 09-07-2016 06:45 PM
 | 1560 | 09-07-2016 06:00 PM
 | 2275 | 09-06-2016 09:03 AM
07-11-2017
05:33 PM
@Ekantheshwara Basappa
Group-to-role mapping using ldapRealm in Shiro is not supported in Zeppelin 0.6.0. What is your HDP version? Here is the Apache JIRA tracking it: https://issues.apache.org/jira/browse/ZEPPELIN-1472 Once that support is available, the LDAP realm changes to 'ldapRealm = org.apache.zeppelin.realm.LdapGroupRealm' and you can use 'ldapRealm.rolesByGroup = hdpeng: admin' for group-to-role mapping.
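If you are on a Zeppelin version where LdapGroupRealm is available, a minimal shiro.ini sketch based on the settings mentioned above would look like the following (the LDAP URL, search base, and the 'hdpeng' group are placeholders for illustration, not values from your environment):
[main]
ldapRealm = org.apache.zeppelin.realm.LdapGroupRealm
# search base for LDAP groups
ldapRealm.contextFactory.environment[ldap.searchBase] = dc=example,dc=com
ldapRealm.contextFactory.url = ldap://ldaphost.example.com:389
ldapRealm.userDnTemplate = uid={0},ou=users,dc=example,dc=com
ldapRealm.contextFactory.authenticationMechanism = simple
# map the LDAP group 'hdpeng' to the Zeppelin 'admin' role
ldapRealm.rolesByGroup = hdpeng: admin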
07-11-2017
05:50 AM
@Abhishek Kumar Yes, you would need to enable HTTPS for HDFS as well. Here is the link to follow: https://community.hortonworks.com/articles/52875/enable-https-for-hdfs.html Configuring a load balancer is not required.
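For reference, the key change that article walks through is switching the HDFS HTTP policy to HTTPS; a minimal sketch of the relevant hdfs-site.xml properties (the hostname and ports below are placeholders, and the keystore/truststore setup in ssl-server.xml described in the article is still a prerequisite):
dfs.http.policy=HTTPS_ONLY
dfs.namenode.https-address=namenode.example.com:50470
dfs.datanode.https.address=0.0.0.0:50475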
07-11-2017
05:38 AM
@Abhishek Kumar Here is a good HCC article you can follow: https://community.hortonworks.com/articles/52876/enable-https-for-yarn-and-mapreduce2.html
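At a high level, that article comes down to switching the HTTP policy for YARN and MapReduce2 once the SSL keystores are in place; a minimal sketch of the properties involved (the ssl-server.xml keystore configuration covered in the article is assumed):
yarn.http.policy=HTTPS_ONLY
mapreduce.jobhistory.http.policy=HTTPS_ONLY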
07-11-2017
12:43 AM
@Rahul P You can try the steps below. Log in to your MySQL database and drop the index from the Hive metastore schema:
# mysql -u root -p -h localhost
mysql> use hive;
mysql> DROP INDEX PCS_STATS_IDX ON PART_COL_STATS;
Then restart the Hive Metastore.
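If you want to confirm the index is gone before restarting (a quick check, assuming your metastore database is named 'hive' as above):
mysql> SHOW INDEX FROM PART_COL_STATS;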
05-04-2017
05:30 PM
1 Kudo
ISSUE: A Spark job fails with "java.lang.LinkageError: ClassCastException: attempting to cast jar:file..." because of a conflict between the RuntimeDelegate class from Jersey in the YARN client libs and the copy in Spark's assembly jar.
ERROR:
17/05/02 17:44:25 ERROR ApplicationMaster: User class threw exception: java.lang.LinkageError: ClassCastException: attempting to castjar:file:/u/applic/data/hdfs7/hadoop/yarn/local/filecache/469/spark-assembly-1.6.2.2.5.3.0-37-hadoop2.7.3.2.5.3.0-37.jar!/javax/ws/rs/ext/RuntimeDelegate.classtojar:file:/u/applic/data/hdfs7/hadoop/yarn/local/filecache/469/spark-assembly-1.6.2.2.5.3.0-37-hadoop2.7.3.2.5.3.0-37.jar!/javax/ws/rs/ext/RuntimeDelegate.class
java.lang.LinkageError: ClassCastException: attempting to castjar:file:/u/applic/data/hdfs7/hadoop/yarn/local/filecache/469/spark-assembly-1.6.2.2.5.3.0-37-hadoop2.7.3.2.5.3.0-37.jar!/javax/ws/rs/ext/RuntimeDelegate.classtojar:file:/u/applic/data/hdfs7/hadoop/yarn/local/filecache/469/spark-assembly-1.6.2.2.5.3.0-37-hadoop2.7.3.2.5.3.0-37.jar!/javax/ws/rs/ext/RuntimeDelegate.class
at javax.ws.rs.ext.RuntimeDelegate.findDelegate(RuntimeDelegate.java:116)
at javax.ws.rs.ext.RuntimeDelegate.getInstance(RuntimeDelegate.java:91)
at javax.ws.rs.core.MediaType.<clinit>(MediaType.java:44)
at com.sun.jersey.core.header.MediaTypes.<clinit>(MediaTypes.java:64)
at com.sun.jersey.core.spi.factory.MessageBodyFactory.initReaders(MessageBodyFactory.java:182)
at com.sun.jersey.core.spi.factory.MessageBodyFactory.initReaders(MessageBodyFactory.java:175)
at com.sun.jersey.core.spi.factory.MessageBodyFactory.init(MessageBodyFactory.java:162)
at com.sun.jersey.api.client.Client.init(Client.java:342)
at com.sun.jersey.api.client.Client.access$000(Client.java:118)
at com.sun.jersey.api.client.Client$1.f(Client.java:191)
at com.sun.jersey.api.client.Client$1.f(Client.java:187)
at com.sun.jersey.spi.inject.Errors.processWithErrors(Errors.java:193)
at com.sun.jersey.api.client.Client.<init>(Client.java:187)
at com.sun.jersey.api.client.Client.<init>(Client.java:170)
at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceInit(TimelineClientImpl.java:282)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.hive.ql.hooks.ATSHook.<init>(ATSHook.java:67)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:379)
at org.apache.hadoop.hive.ql.hooks.HookUtils.getHooks(HookUtils.java:60)
at org.apache.hadoop.hive.ql.Driver.getHooks(Driver.java:1309)
at org.apache.hadoop.hive.ql.Driver.getHooks(Driver.java:1293)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1347)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:495)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:484)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:290)
at org.apache.spark.sql.hive.client.ClientWrapper.liftedTree1$1(ClientWrapper.scala:237)
at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:236)
at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:279)
at org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:484)
at org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:474)
at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:624)
at org.apache.spark.sql.hive.execution.DropTable.run(commands.scala:89)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
at com.ao.multiLevelLoyalty$.main(multiLevelLoyalty.scala:846)
at com.ao.multiLevelLoyalty.main(multiLevelLoyalty.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:559)
17/05/02 17:44:25 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.lang.LinkageError: ClassCastException: attempting to castjar:file:/u/applic/data/hdfs7/hadoop/yarn/local/filecache/469/spark-assembly-1.6.2.2.5.3.0-37-hadoop2.7.3.2.5.3.0-37.jar!/javax/ws/rs/ext/RuntimeDelegate.classtojar:file:/u/applic/data/hdfs7/hadoop/yarn/local/filecache/469/spark-assembly-1.6.2.2.5.3.0-37-hadoop2.7.3.2.5.3.0-37.jar!/javax/ws/rs/ext/RuntimeDelegate.class)
17/05/02 17:44:25 INFO SparkContext: Invoking stop() from shutdown hook
17/05/02 17:44:25 INFO SparkUI: Stopped Spark web UI at http://10.225.135.102:35023
17/05/02 17:44:25 INFO YarnAllocator: Driver requested a total number of 0 executor(s).
17/05/02 17:44:25 INFO YarnClusterSchedulerBackend: Shutting down all executors
17/05/02 17:44:25 INFO YarnClusterSchedulerBackend: Asking each executor to shut down
ROOT CAUSE: This happens because of the conflict between the RuntimeDelegate class from Jersey in the YARN client libs and the copy in Spark's assembly jar. At runtime, YARN calls into ATS (Application Timeline Service) code, which needs a different version of the class and cannot load it because the copy in Spark and the copy in YARN conflict.
RESOLUTION: Disable the timeline service for the job by setting the property below on the HiveContext:
hc = new org.apache.spark.sql.hive.HiveContext(sc)
hc.setConf("yarn.timeline-service.enabled","false")
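If you prefer not to change the application code, the same setting can in principle be passed at submit time, since properties prefixed with spark.hadoop. are forwarded to the Hadoop configuration (a sketch: the class name is taken from the stack trace above, and the jar name is a placeholder, so your launch command may differ):
spark-submit --master yarn --deploy-mode cluster \
  --conf spark.hadoop.yarn.timeline-service.enabled=false \
  --class com.ao.multiLevelLoyalty multiLevelLoyalty.jar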
02-03-2017
05:52 PM
@Colin Cunningham You can follow the steps below. Open the shiro.ini file and edit the following sections: 1) Under the [users] section, add the username and password you want to use for login: [users]
# List of users with their password allowed to access Zeppelin.
# To use a different strategy (LDAP / Database / ...) check the shiro doc at http://shiro.apache.org/configuration.html#Configuration-INISections
admin = password1
maria_dev = maria_dev
2) Under the [urls] section, make the change below: [urls]
# anon means the access is anonymous.
# authcBasic means Basic Auth Security
# To enforce security, comment the line below and uncomment the next one
/api/version = anon
#/** = anon
/** = authc
3) Restart the service.
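If you are not managing Zeppelin through Ambari, a typical way to restart it from the command line (a sketch, assuming an HDP-style install path) is:
/usr/hdp/current/zeppelin-server/bin/zeppelin-daemon.sh restart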
01-26-2017
09:42 PM
4 Kudos
Livy: Livy is an open source REST interface for interacting with Spark. Authorized users can launch a Spark session and submit code. Two different users can access their own private data and sessions, and they can still collaborate on a notebook. Only the Livy server can submit a job securely to a Spark session. Steps to configure the Livy interpreter to work with a secure (Kerberized) HDP cluster: 1. Set up proxy users for the Livy interpreter in core-site.xml. Go to Ambari -> HDFS -> Configs -> Custom core-site and add the properties below:
hadoop.proxyuser.livy.groups=*
hadoop.proxyuser.livy.hosts=*
2. Configure the Livy interpreter in Zeppelin and add the configurations below: livy.superusers=zeppelin-spark
Note - The value for livy.superusers should be your Zeppelin principal, which will be zeppelin-{$Cluster_name}. For example, in this case you can find it by running the command below:
klist -kt /etc/security/keytabs/zeppelin.server.kerberos.keytab
Keytab name: FILE:/etc/security/keytabs/zeppelin.server.kerberos.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 11/15/16 17:33:16 zeppelin-spark@HWX.COM
1 11/15/16 17:33:16 zeppelin-spark@HWX.COM
1 11/15/16 17:33:16 zeppelin-spark@HWX.COM
1 11/15/16 17:33:16 zeppelin-spark@HWX.COM
1 11/15/16 17:33:16 zeppelin-spark@HWX.COM
zeppelin-spark will be your superuser for the Livy interpreter. *Make sure this matches livy.superusers in the livy-conf file. livy.impersonation.enabled=true //this configuration should also be present in livy-conf.
livy.server.access_control.enabled=true
livy.server.access_control.users=livy,zeppelin
livy.server.auth.type=kerberos
livy.server.auth.kerberos.keytab=/etc/security/keytabs/spnego.service.keytab
livy.server.auth.kerberos.principal=HTTP/spark-1.hwx.com@HWX.COM
livy.server.launch.kerberos.keytab=/etc/security/keytabs/livy.service.keytab
livy.server.launch.kerberos.principal=livy/spark-1.hwx.com@HWX.COM
Note - To configure Zeppelin with authentication for Livy you need to set the following in the interpreter settings: zeppelin.livy.principal=zeppelin-spark@HWX.COM
zeppelin.livy.keytab=/etc/security/keytabs/zeppelin.service.keytab
3. Make sure zeppelin.livy.url points to the hostname, not the IP address: zeppelin.livy.url=http://spark-3.hwx.com:8998 4. After saving the configuration changes in the Livy interpreter, restart the interpreter for the changes to take effect.
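To verify the setup end to end, a quick check (a sketch; it assumes the %livy interpreter is bound to your notebook) is to log in to Zeppelin as a regular user, run a simple paragraph such as the one below, and confirm in the YARN ResourceManager UI that the resulting application runs as that user rather than as livy:
%livy.spark
sc.version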
01-17-2017
11:51 PM
@Christian Guegi You can go with a manual upgrade of the cluster and upgrade the Kafka brokers one by one: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_command-line-upgrade/content/upgrade-kafka-23.html
01-11-2017
07:01 PM
@Dezka Dex Can you upload the new stack trace?
01-11-2017
06:01 PM
@Dezka Dex The error you are getting is: Caused by: java.net.SocketException: Connection reset
Failed to connect to KDC - Failed to communicate with the Active Directory at LDAP://hq.domain.com/OU=Production,OU=domain,DC=hq,DC=domain,DC=com: simple bind failed: hq.domain.com:389
The error above indicates a communication failure with AD, but you mentioned the KDC test passes? Can you make sure you are using the correct connection string, and can you run ldapsearch against it? Also, have you followed this doc: https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.0.0/bk_Ambari_Security_Guide/content/_configure_ambari_to_use_ldap_server.html Can you also upload your krb5.conf?
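For reference, a sketch of an ldapsearch against that same AD endpoint (the bind account and the filter below are placeholders for illustration):
ldapsearch -H ldap://hq.domain.com:389 \
  -D "binduser@hq.domain.com" -W \
  -b "OU=Production,OU=domain,DC=hq,DC=domain,DC=com" \
  "(sAMAccountName=testuser)"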