Member since: 04-23-2018
23 Posts
0 Kudos Received
0 Solutions
05-06-2019
09:06 PM
@Geoffrey Shelton Okot My cluster is Kerberos-enabled using our AD server as the KDC, and Kerberos was enabled through the Ambari automated wizard. I am not able to locate the below files anywhere on my cluster servers (the OS is Red Hat Enterprise Linux 7.5).
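As a quick check, assuming the file names from the follow-up post below (kdc.conf and kadm5.acl): those files normally exist only on an MIT KDC host, so with AD acting as the KDC they are not expected on the cluster nodes at all. A one-liner to confirm they are absent:

find /etc -name 'kdc.conf' -o -name 'kadm5.acl' 2>/dev/null   # no output means neither file exists under /etc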
05-06-2019
08:21 PM
My questions are: 1) How do I run a Hive query in the %jdbc(hive) interpreter so that it uses LLAP? Right now it runs with Tez. 2) Can I define a YARN queue in the %jdbc interpreter?
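A minimal sketch of one way to do both, assuming HiveServer2 Interactive (LLAP) listens on port 10500 (llap-host, my_queue, and my_user are placeholders; confirm the actual endpoint in Ambari under the Hive configs):

# In Zeppelin -> Interpreter -> jdbc, point hive.url at the HiveServer2
# Interactive (LLAP) endpoint instead of plain HiveServer2, and pin the
# YARN queue as a session conf in the URL:
#   hive.driver = org.apache.hive.jdbc.HiveDriver
#   hive.url    = jdbc:hive2://llap-host:10500/default?tez.queue.name=my_queue
#   hive.user   = my_user
# Verify the same URL outside Zeppelin with beeline:
beeline -u "jdbc:hive2://llap-host:10500/default?tez.queue.name=my_queue" -e "select 1"

Queries sent through that URL run on the LLAP daemons and land in the queue named by tez.queue.name.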
05-06-2019
08:15 PM
@Geoffrey Shelton Okot Sir, I don't find the below files in my cluster: kadm5.acl, kdc.conf. Thanks for your quick reply, sir. Regards
05-06-2019
08:14 PM
I have pasted the shiro.ini file; let me know if you need the complete file and any log files as well. I am happy to provide them.
05-06-2019
08:12 PM
@Geoffrey Shelton Okot Just for your reference: if I set livy.impersonation.enabled to false, everything works with Kerberos. But I want to set Livy impersonation to true so that users from my organization can use Zeppelin and KNIME to submit Spark jobs by logging in to Zeppelin with their own domain credentials, avoiding having to manage user access individually. My version details: HDP 2.6, Kerberos enabled through AD, Spark 2.3.0, Zeppelin 0.7.3. Zeppelin is enabled with LDAP config for user auth, Knox is enabled, and custom core-site is updated with *. I would appreciate it if someone could help fix the Livy user impersonation issue; below is the error in Zeppelin.
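For reference, a minimal sketch of the settings that usually drive Livy impersonation on HDP (the property names are the standard Hadoop/Livy ones; the zeppelin-goa_datalake_1 principal is taken from the error in the later posts, and the * scopes are placeholders worth narrowing in production):

# Ambari -> HDFS -> Custom core-site (restart HDFS/YARN afterwards):
#   hadoop.proxyuser.livy.hosts  = *
#   hadoop.proxyuser.livy.groups = *
# Livy2 server side (livy.conf):
#   livy.impersonation.enabled = true
#   livy.superusers            = zeppelin-goa_datalake_1

The livy.superusers entry is what allows the Zeppelin service principal to submit sessions on behalf of end users.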
05-06-2019
08:02 PM
@Geoffrey Shelton Okot I am logged in to Zeppelin as my user (sameer.dalai), which is authenticated via LDAP, to run livy2.pyspark. Please find my krb5.conf and shiro.ini file.
05-05-2019
03:56 AM
Can you try setting livy.spark.master to yarn?
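For example, in Zeppelin -> Interpreter -> livy2 (a sketch; yarn-cluster is the value HDP's Livy interpreter typically ships with):

#   livy.spark.master = yarn-cluster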
05-05-2019
03:47 AM
Can you please let us know which Spark and Zeppelin versions you are using? Is Kerberos enabled in your cluster? Below are a few things to check to resolve this issue:
1) Make sure you configured the livy proxyuser hosts and groups entries in custom core-site (HDFS -> Custom core-site), and make sure they are set to * for livy.
2) You also need to configure livy.superusers for your Livy interpreter.
3) Do you have YARN queue access?
4) If Ranger is enabled, please make sure you have access to the YARN queue.
Hope this helps. A few quick ways to verify these from a shell follow below.
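(Sketch only; the paths below are the usual HDP locations and the queue name is a placeholder.)

# 1) proxyuser entries actually landed in core-site on the Livy host:
grep -A1 'hadoop.proxyuser.livy' /etc/hadoop/conf/core-site.xml
# 2) the Zeppelin principal is listed as a Livy superuser:
grep 'livy.superusers' /usr/hdp/current/livy2-server/conf/livy.conf
# 3) the target YARN queue exists and is running:
yarn queue -status default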
05-05-2019
03:27 AM
Check whether the Ambari agent is running on that datanode. You can go to the Ambari host and check the host status of datanode2.
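For example, on datanode2 itself (standard Ambari agent service commands):

service ambari-agent status
# restart it if it is down:
service ambari-agent restart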
05-05-2019
03:25 AM
Since you have 64 GB, it is not a bad idea to use the Hive LLAP queue in YARN, as LLAP processes data in memory.
05-05-2019
03:18 AM
Check the Ambari processes from the Linux terminal:
# service ambari-server status
# service ambari-agent status
If both processes are running, open http://ambari-server-hostname:8080 to get the Ambari login prompt. If this helps, please accept the answer.
05-04-2019
01:55 AM
I am also having a similar problem. My users log in to the Zeppelin notebook and are authenticated by LDAP. After that they want to use 1) %livy2.pyspark and 2) %livy2.sql with their own login user in Zeppelin. But when I enable user impersonation in Livy, it fails. My error is:
org.apache.zeppelin.livy.LivyException: {"msg":"User 'zeppelin-goa_datalake_1' not allowed to impersonate 'Some(sameer.dalai)'."}
org.springframework.web.client.HttpClientErrorException: 403 Forbidden
at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:91)
at org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:667)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:620)
at org.springframework.security.kerberos.client.KerberosRestTemplate.doExecuteSubject(KerberosRestTemplate.java:202)
at org.springframework.security.kerberos.client.KerberosRestTemplate.access$100(KerberosRestTemplate.java:67)
at org.springframework.security.kerberos.client.KerberosRestTemplate$1.run(KerberosRestTemplate.java:191)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
How do I ensure that data scientists can use Zeppelin to submit Spark jobs via the Livy interpreter?
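One way to reproduce this 403 outside Zeppelin, as a sketch (the keytab path is the usual HDP default, and livy-host:8999 is a placeholder for the Livy2 endpoint):

# authenticate as the Zeppelin service principal, then ask Livy to
# create a session impersonating the end user via proxyUser:
kinit -kt /etc/security/keytabs/zeppelin.server.kerberos.keytab zeppelin-goa_datalake_1
curl --negotiate -u : -X POST -H 'Content-Type: application/json' \
  -d '{"kind":"pyspark","proxyUser":"sameer.dalai"}' \
  http://livy-host:8999/sessions
# a 403 here confirms the rejection comes from the Livy-side
# superuser/proxyuser configuration rather than from Zeppelin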
05-04-2019
01:53 AM
@Geoffrey Shelton Okot This is my interpreter setting. I am logged in to Zeppelin as the user sameer.dalai, authenticated by LDAP. There is no SSSD on the Linux host, custom core-site is all *, and Livy2 user impersonation is enabled. Note: I am now getting an error from %livy2.pyspark:
org.apache.zeppelin.livy.LivyException: {"msg":"User 'zeppelin-goa_datalake_1' not allowed to impersonate 'Some(sameer.dalai)'."}
org.springframework.web.client.HttpClientErrorException: 403 Forbidden
at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:91)
at org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:667)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:620)
at org.springframework.security.kerberos.client.KerberosRestTemplate.doExecuteSubject(KerberosRestTemplate.java:202)
at org.springframework.security.kerberos.client.KerberosRestTemplate.access$100(KerberosRestTemplate.java:67)
at org.springframework.security.kerberos.client.KerberosRestTemplate$1.run(KerberosRestTemplate.java:191)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
05-01-2019
08:28 PM
@Geoffrey Shelton Okot Hi Sir, I have an HDP 2.6 cluster with Kerberos enabled. Zeppelin is integrated with LDAP for auth, and the proxy settings are set correctly. Problem statement: Livy user impersonation is not working. Trying to run from Zeppelin:
%livy2.spark
sc.version
Error: javax.security.auth.login.LoginException: Unable to obtain password from user
at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:897)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:760)
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at org.springframework.security.kerberos.client.KerberosRestTemplate.doExecute(KerberosRestTemplate.java:185)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:580)
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:498)
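For this particular LoginException, a sketch of the interpreter-side Kerberos settings worth checking (the property names are the standard Zeppelin Livy interpreter ones; the keytab path is the usual HDP default and the realm is a placeholder):

# Zeppelin -> Interpreter -> livy2:
#   zeppelin.livy.principal = zeppelin-goa_datalake_1@EXAMPLE.COM
#   zeppelin.livy.keytab    = /etc/security/keytabs/zeppelin.server.kerberos.keytab
# "Unable to obtain password from user" usually means these are empty or
# the keytab is unreadable; verify the keytab itself works:
kinit -kt /etc/security/keytabs/zeppelin.server.kerberos.keytab zeppelin-goa_datalake_1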
04-30-2019
11:04 PM
@Geoff Foote @Geoffrey Shelton Okot Any update on this issue yet? I am having the same issue.
02-14-2019
04:55 AM
I have an HDP 3.1.0 cluster and all the services are running fine except Atlas. When I start Atlas through Ambari, it fails to start. Below is the error; please suggest a resolution.
Traceback (most recent call last):
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
ExecutionFailed: Execution of 'cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n' returned 1. SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/phoenix/phoenix-5.0.0.3.1.0.0-78-server.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
atlas_janus
ATLAS_ENTITY_AUDIT_EVENTS
atlas
TABLE
0 row(s)
Took 1.3657 seconds
Took 11.3543 seconds
java exception
ERROR Java::OrgApacheHadoopHbaseIpc::RemoteWithExtrasException: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2966)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1933)
at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:600)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
The above exception was the cause of the following exception:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/ATLAS/package/scripts/metadata_server.py", line 254, in <module>
MetadataServer().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/ATLAS/package/scripts/metadata_server.py", line 102, in start
user=params.hbase_user
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
returns=self.resource.returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n' returned 1. SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/phoenix/phoenix-5.0.0.3.1.0.0-78-server.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
atlas_janus
ATLAS_ENTITY_AUDIT_EVENTS
atlas
TABLE
0 row(s)
Took 1.0039 seconds
Took 10.4210 seconds
java exception
ERROR Java::OrgApacheHadoopHbaseIpc::RemoteWithExtrasException: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2966)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1933)
at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:600)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
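The root cause in both tracebacks is HBase rejecting the table creation with PleaseHoldException: Master is initializing. A quick check before retrying, as a sketch (this mirrors the script's own cat ... | hbase shell -n invocation; adjust the service user if yours differs):

# is the HBase master finished initializing?
echo "status 'simple'" | sudo -u hbase hbase shell -n
# once the master reports as active, retry the failed Atlas start in Ambari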
02-12-2019
12:00 AM
Any update on this issue? I still have the same issue.
01-16-2019
02:24 AM
How did you fix this?
11-24-2018
10:22 PM
I am not able to find a way to sync all my HDFS directory paths into Atlas in an HDP 2.6 cluster. Is there any way?
11-24-2018
10:16 PM
Please refer to this website and see if it helps: https://atlas.apache.org/
04-24-2018
01:09 PM
Hi, thanks for your reply. Please find the complete application log. The current CDH version in the cluster is 5.14.0-1.cdh5.14.0.p0.24. This is a sandbox cluster, the client is using it, and there is data in HDFS, so I can't create the cluster again. As an option, I am currently planning to add the gateway node through the Cloudera Manager add-host wizard; will that work, or will I still face the same issue? Just to inform you: Cloudera Manager also downloaded the latest CDH version, 5.14.2-1.cdh5.14.2.p0.3 (but it is not distributed and activated yet).
04-23-2018
02:26 PM
Hi, I have a running cluster with 3 data nodes and 1 master node in Azure. I tried to add the first gateway node through the Cloudera Director web UI:
-> I have seen the node get created successfully in Azure
-> It started and attached in Cloudera Manager
-> The StreamSets, Spark, and CDH parcels also got distributed
-> The Cloudera Manager agent was installed
At the last step it failed with an error message, Director rolled back the gateway node, and the node was deleted from Azure. As per the log:
-> The edge node got created and also attached to Cloudera Manager
-> The StreamSets, CDH, and Spark 2 parcels also got distributed and activated
[2018-04-19 20:29:46.212 +0000] INFO [p-619efeab757f-DefaultUpdateClusterJob] a230bff5-b6be-4e08-b652-f72041edb03b PUT /api/v11/environments/HDMI_TEST/deployments/SANDBOX/clusters/sandboxcluster com.cloudera.launchpad.bootstrap.cluster.UnboundedWaitForParcelStage - c.c.l.b.c.UnboundedWaitForParcelStage: Parcel (STREAMSETS_DATACOLLECTOR, 3.1.0.0) stage is ACTIVATED as expected and stable
[2018-04-19 20:29:54.462 +0000] INFO [p-619efeab757f-DefaultUpdateClusterJob] a230bff5-b6be-4e08-b652-f72041edb03b PUT /api/v11/environments/HDMI_TEST/deployments/SANDBOX/clusters/sandboxcluster com.cloudera.launchpad.bootstrap.cluster.UnboundedWaitForParcelStage - c.c.l.b.c.UnboundedWaitForParcelStage: Parcel (SPARK2, 2.2.0.cloudera2-1.cdh5.12.0.p0.232957) stage is ACTIVATED as expected and stable
[2018-04-19 20:29:50.337 +0000] INFO [p-619efeab757f-DefaultUpdateClusterJob] a230bff5-b6be-4e08-b652-f72041edb03b PUT /api/v11/environments/HDMI_TEST/deployments/SANDBOX/clusters/sandboxcluster com.cloudera.launchpad.bootstrap.cluster.UnboundedWaitForParcelStage - c.c.l.b.c.UnboundedWaitForParcelStage: Parcel (CDH, 5.14.0-1.cdh5.14.0.p0.24) stage is ACTIVATED as expected and stable
After that it fails with this error:
[2018-04-19 20:29:55.812 +0000] WARN [p-633ca3a297f0-ApplyHostTemplatesPerHostJob] a230bff5-b6be-4e08-b652-f72041edb03b PUT /api/v11/environments/HDMI_TEST/deployments/SANDBOX/clusters/sandboxcluster com.cloudera.launchpad.bootstrap.cluster.hostTemplate.CreateAndApplyHostTemplateJob$ApplyHostTemplatesJob - c.c.l.b.c.h.CreateAndApplyHostTemplateJob: Bad request exception when applying host template com.cloudera.api.ext.ClouderaManagerException: API call to Cloudera Manager failed. Method=HostTemplatesResource.applyHostTemplate. Response Status Code: 400. Message: { "message" : "Host must have a single version of CDH installed." }. - Cause: javax.ws.rs.BadRequestException HTTP 400 Bad Request
I am not able to spin up the edge node again, as Director doesn't give me the option to add nodes while the cluster is in an "update request failed" state in Director. However, the existing cluster is running fine, and the new edge node was automatically deleted by Director. At this stage I am not able to add the gateway node again through Director, because the cluster status in Director is "update failed" and I am not getting the modify-cluster option to add the gateway node.