Member since: 11-07-2019 | Posts: 71 | Kudos Received: 1 | Solutions: 0
02-04-2020
02:10 PM
Hello 🙂 I have the same issue. I integrated the edge node with Active Directory; users could connect and submit their jobs to YARN before Kerberos was enabled on the cluster. I used Samba on the edge node to create user home folders and to look up user information. Now that I have configured Kerberos, I am getting the same error: user1 not found, even though user1 exists in AD. Should I now add this user with a normal adduser command on all nodes? How would it then still be an AD user and not a local one? I did not configure Samba on the other nodes; should I? Thanks a lot in advance.
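For what it's worth, the usual alternative to running adduser everywhere is to make the AD accounts resolvable by the OS on every node (NodeManagers included), typically via SSSD after joining each node to the domain (e.g. with realm join); the users then remain AD users, not local ones. A minimal sketch of /etc/sssd/sssd.conf, where example.com is a placeholder for your AD domain (an assumption; adapt to your environment):

```ini
[sssd]
config_file_version = 2
domains = example.com
services = nss, pam

[domain/example.com]
id_provider = ad
access_provider = ad
# Resolve short names (user1) instead of user1@example.com,
# so they match what YARN sees
use_fully_qualified_names = False
# Consistent home directories across nodes
fallback_homedir = /home/%u
```

With this in place on every node, `id user1` should succeed cluster-wide without any local account.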
02-04-2020
09:17 AM
Hello, how did you add the users? I am using Active Directory users, and I only added them on the edge node using Samba + Kerberos. Now that I have enabled Kerberos on the Hortonworks Hadoop cluster, I get the same issue as yours. Should I add the same user on all nodes with adduser? In which group? How can it be resolved as an AD user? Thanks
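Whichever mechanism is used (Samba/winbind or SSSD), YARN only needs the account to be resolvable through the OS name service on every NodeManager, not a local account. A quick hedged check with getent (shown here against root, which exists everywhere; substitute the AD account name):

```shell
# Check that a user is resolvable via NSS (local files, SSSD, winbind, ...).
# Replace "root" with the AD account you expect YARN to run containers as.
user_to_check="root"

if getent passwd "$user_to_check" > /dev/null; then
    echo "$user_to_check is resolvable on this node"
else
    echo "$user_to_check is NOT resolvable on this node"
fi
```

Running this on each node quickly shows where the name service is missing the AD user.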
02-04-2020
08:55 AM
When I updated the YARN configuration with:

yarn.scheduler.capacity.root.default.acl_administer_jobs=yarn,*,user1
yarn.scheduler.capacity.root.default.acl_administer_queue=yarn,*,user1
yarn.scheduler.capacity.root.default.acl_submit_applications=yarn,yarn-ats,*,user1

user1 is able to authenticate. However, he is now getting this error:

Error in rxRemoteExecute(computeContext, shellCmd, schedulerJobInstance) :
/var/RevoShare/user1/cluster-127006E48F49439EA1A090A78C9851C9/start-job.sh: line 97: export: `-Xrs': not a valid identifier
/var/RevoShare/fcuni001/cluster-127006E48F49439EA1A090A78C9851C9/start-job.sh: line 97: export: `-Xss4m': not a valid identifier
ERROR: Fail to execute spark-submit. Last 20 lines' log:
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:934)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:925)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:925)
at com.microsoft.scaler.spark.api.SparkApp$.main(SparkApp.scala:28)
at com.microsoft.scaler.spark.api.SparkApp.main(SparkApp.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.re

On the NameNode web interface:

Application application_1580827205892_0001 failed 2 times due to AM Container for appattempt_1580827205892_0001_000002 exited with exitCode: -1000
Failing this attempt. Diagnostics: [2020-02-04 15:43:08.042] Application application_1580827205892_0001 initialization failed (exitCode=255) with output:
main : command provided 0
main : run as user is fcuni001
main : requested yarn user is user1
User user1 not found
For more detailed output, check the application tracking page: http://namenode:8088/cluster/app/application_1580827205892_0001 Then click on links to logs of each attempt. Failing the application.

Any idea please? Thanks a lot
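As an aside, the `export: ... not a valid identifier` lines are a separate problem from the YARN one: in the generated start-job.sh, JVM flags appear to be handed to bash's export as if they were variable names, which is what happens when a value containing spaces is assigned unquoted. A minimal bash reproduction (the variable name JAVA_OPTS is purely illustrative; the post's script may use a different name):

```shell
# Unquoted value splits on spaces; the trailing flag is taken as a
# variable name, reproducing the forum error:
#   export: `-Xss4m': not a valid identifier
export JAVA_OPTS=-Xrs -Xss4m 2>/dev/null || echo "unquoted export failed as expected"

# Quoting keeps the whole flag string as one value
export JAVA_OPTS="-Xrs -Xss4m"
echo "JAVA_OPTS=$JAVA_OPTS"
```

So that error points at a quoting bug in the job script, independent of the "User user1 not found" failure.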
02-04-2020
05:40 AM
Dear Community,
After enabling Kerberos with Active Directory on a Hortonworks Hadoop cluster (Ambari), users are unable to submit jobs to YARN.
Error
Error in rxRemoteExecute(computeContext, shellCmd, schedulerJobInstance) :
/var/RevoShare/aduser/cluster-7B39D0A894BC4F73ABC73D192697AFC3/start-job.sh: line 97: export: `-Xrs': not a valid identifier
/var/RevoShare/aduser/cluster-7B39D0A894BC4F73ABC73D192697AFC3/start-job.sh: line 97: export: `-Xss4m': not a valid identifier
ERROR: Fail to execute spark-submit. Last 20 lines' log:
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
Caused by: org.apache.hadoop.security.AccessControlException: User aduser does not have permission to submit application_1580742122197_0003 to queue default
at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:429)
... 12 more
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1498)
at org.apache.hadoop.ipc.Client.call(Client.java:1444)
at org.ap
I set these properties as follows:
yarn.scheduler.capacity.root.default.acl_submit_applications=yarn,yarn-ats,*
yarn.scheduler.capacity.root.acl_submit_applications=yarn,ambari-qa,*
Please advise
Asma
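For reference, a Capacity Scheduler ACL is a single string of the form "user1,user2 group1,group2" (a space separates the user list from the group list), and the value "*" on its own means everyone; a "*" mixed into a comma-separated user list is treated as a literal user name rather than a wildcard. A hedged sketch of both variants (hadoopusers is a hypothetical AD group; assumptions, not your exact setup):

```properties
# Simplest permissive test setup: anyone may submit to root and default
yarn.scheduler.capacity.root.acl_submit_applications=*
yarn.scheduler.capacity.root.default.acl_submit_applications=*

# Restrictive variant: named users plus a (hypothetical) AD group.
# Note the SPACE between the user list and the group list.
# Queue ACLs are inherited downward, so root must be restricted too,
# or a permissive root ACL grants submit access to every child queue.
# yarn.scheduler.capacity.root.acl_submit_applications=yarn
# yarn.scheduler.capacity.root.default.acl_submit_applications=yarn,user1 hadoopusers
```

After changing these properties, the Capacity Scheduler must be refreshed (or the ResourceManager restarted) for the ACLs to take effect.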
Labels: Apache YARN
02-03-2020
09:23 AM
Should I create a principal for each user in the AD? We are using Active Directory users. If yes, how? Many thanks, Asma
02-03-2020
08:55 AM
Actually, for more details: on my Ambari server machine I have this ticket:

[root@ambariserver ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: spark-analytics_hadoop@REALM.COM
Valid starting       Expires              Service principal
02/03/2020 13:31:21  02/03/2020 23:31:21  krbtgt/REALM.COM@REALM.COM
	renew until 02/10/2020 13:31:21

When I connect there with the spark user:

HADOOP_ROOT_LOGGER=DEBUG,console /usr/hdp/3.1.4.0-315/spark2/bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://Edgenode:7077 --num-executors 4 --driver-memory 512m --executor-memory 512m --executor-cores 1 /usr/hdp/3.1.4.0-315/spark2/examples/jars/spark-examples_2.11-2.3.2.3.1.4.0-315.jar

=> OK.

Now on the edge node:

[root@EdgeNode ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: spark/EdgeNode@REALM.COM
Valid starting       Expires              Service principal
02/03/2020 16:52:12  02/04/2020 02:52:12  krbtgt/REALM.COM@REALM.COM
	renew until 02/10/2020 16:52:12

But when I connect there with user spark and run the same spark-submit command, I get this error:

20/02/03 17:53:01 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@69cac930{/metrics/json,null,AVAILABLE,@Spark}
20/02/03 17:53:01 WARN Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
20/02/03 17:53:01 ERROR SparkContext: Error initializing SparkContext.
java.io.IOException: DestHost:destPort NameNode:8020 , LocalHost:localPort EdgeNode/10.48.142.32:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:4

Did I miss something? From their laptops, users launch these commands:

cluster = RxSpark(sshHostname = "EdgeNode", sshUsername = "username")
rxSetComputeContext(cluster)
source = c("~/AirlineDemoSmall.csv")
dest_file = "/share"
rxHadoopMakeDir(dest_file)

They are getting the same issue. On all cluster nodes, hdfs dfs -ls / works well.

Please advise. Thanks, Asma
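One piece of background that may help here: the edge-node ticket is for the principal spark/EdgeNode@REALM.COM, and Hadoop maps such a principal to a short OS user name via the hadoop.security.auth_to_local rules. As a rough illustration of the idea only (a simplified re-implementation, not Hadoop's actual code, and ignoring the local-realm check the real DEFAULT rule performs):

```python
import re

def default_auth_to_local(principal: str) -> str:
    """Very simplified sketch of Hadoop's DEFAULT auth_to_local behaviour:
    strip the @REALM suffix and any /host instance component, leaving
    the short name that must exist as an OS user."""
    short = re.sub(r"@.*$", "", principal)  # drop the realm
    short = re.sub(r"/.*$", "", short)      # drop the host instance
    return short

print(default_auth_to_local("spark/EdgeNode@REALM.COM"))  # -> spark
print(default_auth_to_local("user1@REALM.COM"))           # -> user1
```

So a ticket for spark/EdgeNode@REALM.COM is expected to act as the local user spark; if the mapped short name is not resolvable on a node, services reject the caller.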
02-03-2020
06:23 AM
Thanks a lot. The problem for HDFS is now fixed; however, when I try to launch a script from an edge node, I am getting the same issue:

/usr/hdp/3.1.4.0-315/spark2/bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://edgenode.servername:7077 --num-executors 4 --driver-memory 512m --executor-memory 512m --executor-cores 1 /usr/hdp/3.1.4.0-315/spark2/examples/jars/spark-examples_2.11-2.3.2.3.1.4.0-315.jar

Results:

20/02/03 15:13:41 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200203151341-0000/79 is now RUNNING
20/02/03 15:13:41 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@69cac930{/metrics/json,null,AVAILABLE,@Spark}
20/02/03 15:13:42 WARN Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
20/02/03 15:13:42 ERROR SparkContext: Error initializing SparkContext.
java.io.IOException: DestHost:destPort namenode.servername:8020 , LocalHost:localPort edgenodeaddress:0.
Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1502)
at org.apache.hadoop.ipc.Client.call(Client.java:1444)
at org.apache.hadoop.ipc.Client.call(Client.java:1354)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:900)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1660)
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1577)
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1574)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1589)
at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:100)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:522)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2498)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:934)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:925)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:925)
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:904)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:758)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:721)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:814)
at org.apache.hadoop.ipc.Client$Connection.access$3600(Client.java:411)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1559)
at org.apache.hadoop.ipc.Client.call(Client.java:1390)
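One thing worth checking in the command above: --master spark://edgenode.servername:7077 points at a Spark standalone master, and Spark standalone does not fetch Kerberos delegation tokens for HDFS the way YARN mode does, which matches the "Client cannot authenticate via:[TOKEN, KERBEROS]" symptom when the context tries to write event logs to HDFS. A hedged sketch of the alternative submission (the command line is only built and printed here, not executed; the paths come from the post, and actually running it still requires a valid TGT from kinit on the edge node):

```shell
# Build the spark-submit invocation for YARN client mode instead of a
# standalone master; in YARN mode, Spark obtains HDFS delegation tokens
# from the submitter's Kerberos ticket.
SPARK_HOME=/usr/hdp/3.1.4.0-315/spark2
CMD="$SPARK_HOME/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn --deploy-mode client \
  --num-executors 4 --driver-memory 512m \
  --executor-memory 512m --executor-cores 1 \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.3.2.3.1.4.0-315.jar"

# Printed rather than executed: this sketch assumes a Kerberized
# HDP 3.1.4 cluster and a valid ticket (kinit) on the edge node.
echo "$CMD"
```

This is only a sketch under those assumptions; if the standalone master on port 7077 is intentional, the event-log and HDFS access paths need a different Kerberos strategy.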
01-31-2020
08:31 AM
Thanks a lot 🙂 I have configured the cluster with Kerberos using Active Directory, but I got some issues when connecting:

[root@server keytabs]# hdfs dfs -ls /
20/01/31 16:31:19 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
ls: DestHost:destPort namenode:8020 , LocalHost:localPort ambari/ip:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]

Any idea please? It looks like port 8020 is also blocked. Thanks, Asma
01-30-2020
09:27 AM
Thank you for your help. I tried to restart the Ambari server, but in vain; I got this error:

2020-01-30 18:20:21,866 INFO [main] KerberosChecker:64 - Checking Ambari Server Kerberos credentials.
2020-01-30 18:20:22,052 ERROR [main] KerberosChecker:120 - Client not found in Kerberos database (6)
2020-01-30 18:20:22,052 ERROR [main] AmbariServer:1119 - Failed to run the Ambari Server
org.apache.ambari.server.AmbariException: Ambari Server Kerberos credentials check failed. Check KDC availability and JAAS configuration in /etc/ambari-server/conf/krb5JAASLogin.conf
at org.apache.ambari.server.controller.utilities.KerberosChecker.checkJaasConfiguration(KerberosChecker.java:121)
at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:1110)

The krb5JAASLogin.conf is configured like this:

com.sun.security.jgss.krb5.initiate {
    com.sun.security.auth.module.Krb5LoginModule required
    renewTGT=false
    doNotPrompt=true
    useKeyTab=true
    keyTab="/etc/security/ambariservername.keytab"
    principal="ambariservername@REALM.COM"
    storeKey=true
    useTicketCache=false;
};

I tried to follow these links:
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/authentication-with-kerberos/content/kerberos_optional_install_a_new_mit_kdc.html
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/authentication-with-kerberos/content/set_up_kerberos_for_ambari_server.html

Any suggestion please? 😞 Thanks
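"Client not found in Kerberos database (6)" usually means the principal named in the JAAS file does not exist in the KDC (here, AD), or is spelled or cased differently there. A hedged diagnostic sketch using the keytab path and principal from the snippet above (wrapped in a file check so it does nothing on machines that lack the keytab):

```shell
# Verify that the keytab's stored principal matches the JAAS entry and
# that the KDC actually knows it. Paths/names are taken from the post.
KEYTAB=/etc/security/ambariservername.keytab
PRINC=ambariservername@REALM.COM

if [ -f "$KEYTAB" ]; then
    # List the principals stored in the keytab; the name must match the
    # JAAS 'principal' value exactly, realm included.
    klist -kt "$KEYTAB"
    # Try a non-interactive login with the keytab against the KDC.
    kinit -kt "$KEYTAB" "$PRINC" && echo "keytab login OK"
else
    echo "keytab $KEYTAB not present on this host"
fi
```

If kinit with the keytab fails with the same "Client not found" error, the account needs to be (re)created in AD and a fresh keytab exported.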
01-27-2020
08:48 AM
Dear community, I have installed a Hadoop cluster on 8 servers using Ambari (Hortonworks). I am able to access WebHDFS using the IP address and the default port 50070 without authentication. How can I secure WebHDFS? P.S. I did not enable Kerberos in Ambari > Enable Kerberos; should I do it? Any suggestion will be appreciated. Thanks, Asma
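Enabling Kerberos (Ambari > Enable Kerberos) is the standard way to put authentication in front of WebHDFS; under the hood it sets HDFS properties along these lines (a sketch with placeholder realm and keytab path; Ambari generates and manages these values for you):

```properties
dfs.webhdfs.enabled=true
# SPNEGO principal/keytab used by the NameNode/DataNode web endpoints;
# _HOST is substituted with each host's FQDN at runtime
dfs.web.authentication.kerberos.principal=HTTP/_HOST@REALM.COM
dfs.web.authentication.kerberos.keytab=/etc/security/keytabs/spnego.service.keytab
# Require Kerberos (SPNEGO) for the Hadoop HTTP consoles instead of 'simple'
hadoop.http.authentication.type=kerberos
```

With this in place, unauthenticated requests to port 50070 are rejected and clients must present a Kerberos ticket (e.g. curl --negotiate).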