Created 02-26-2016 02:51 AM
I enabled Kerberos on an HDP 2.3.2 cluster using Ambari 2.1.2.1 and then tried to run a MapReduce job on the edge node as a local user, but the job failed:
Error Message:
Diagnostics: Application application_1456454501315_0001 initialization failed (exitCode=255) with output: main : command provided 0
main : run as user is xxxxx
main : requested yarn user is xxxxx
User xxxxx not found
Failing this attempt. Failing the application.
16/02/25 18:42:28 INFO mapreduce.Job: Counters: 0
Job Finished in 7.915 seconds
My understanding was that the local user only needs to exist on the edge node, but the MapReduce job appears to be failing because the user is not present on the other nodes. Please help.
Example MapReduce job:
XXXXX:~#yarn jar /usr/hdp/2.3.2.0-2950/hadoop-mapreduce/hadoop-mapreduce-examples-2.7.1.2.3.2.0-2950.jar pi 16 100000
Created 02-26-2016 11:19 PM
Jobs are running fine after I added the user to the hadoop group on all the nodes, but I am not sure that adding user accounts to the hadoop group is a good idea.
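For reference, this is roughly what was run on each worker node (a minimal sketch; the username xxxxx is a placeholder and hadoop is the default group on an Ambari-installed cluster):
# run on every NodeManager host
useradd xxxxx                # create the local account
usermod -a -G hadoop xxxxx   # the group change I made; the account existing is what matters
id xxxxx                     # verify the account resolves on this node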
Created 02-26-2016 04:34 AM
My understanding is that on a Kerberos-enabled cluster, the user (a local account matching the principal) must be present on all the nodes.
Refer to this: https://community.hortonworks.com/questions/15160/adding-a-new-user-to-the-cluster.html
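A quick way to check is to confirm that the account resolves on every NodeManager host; a minimal sketch (assuming passwordless SSH and a hypothetical workers.txt host list):
for host in $(cat workers.txt); do
  echo "== $host =="
  ssh "$host" "id xxxxx" || echo "xxxxx missing on $host"
done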
Created 02-26-2016 07:04 AM
@rbalam, when your Hadoop cluster is integrated with Kerberos security, the authenticated user must exist on every node where a task runs. Refer to the link already shared by "rahul pathak".
Created 02-26-2016 11:58 AM
Could you please confirm this? If I need to have users on all the nodes in the cluster to run jobs successfully, I could end up with quite a few accounts on every node, which may become a maintenance headache down the line.
Created 02-26-2016 06:33 PM
@rbalam, yes, I am sure; it will be resolved after you add the user on the other nodes as well. First resolve this issue, then we can think about the other problems.
As a first test, you can try with one user that already exists on every node and check the output of the MapReduce job.
To reduce the maintenance headache, you have to use a centralized LDAP/directory service along with the Kerberos server for user management.
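For example, once the nodes are enrolled in a directory (e.g. via SSSD), each account should resolve through the name service switch without a local /etc/passwd entry; a quick check on any NodeManager host (the username is a placeholder):
getent passwd xxxxx   # should return the directory-backed entry
id xxxxx              # should show the uid/gid and group membership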
Created 02-26-2016 11:46 AM
I ran the job again and tried to get the YARN logs; here is what I see:
xxxxx:~#yarn logs -applicationId application_1456457210711_0002
16/02/26 03:44:26 INFO impl.TimelineClientImpl: Timeline service address: http://yarntimelineserveraddress:8188/ws/v1/timeline/
/app-logs/xxxxx/logs/application_1456457210711_0002 does not have any log files.
Here is what I see on the ResourceManager UI:
Application application_1456457210711_0002 failed 2 times due to AM Container for appattempt_1456457210711_0002_000002 exited with exitCode: -1000
For more detailed output, check application tracking page: http://resourcemanageruri:8088/cluster/app/application_1456457210711_0002 Then, click on links to logs of each attempt.
Diagnostics: Application application_1456457210711_0002 initialization failed (exitCode=255) with output: main : command provided 0
main : run as user is xxxxx
main : requested yarn user is xxxxx
User xxxxx not found
Failing this attempt.
Failing the application.
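Since the aggregated logs are empty (the containers never started), the initialization error from the container executor is normally written to the NodeManager log on the host that tried to launch the AM; a quick check (the log path is the usual HDP default and may differ on your cluster):
# on the NodeManager host that attempted the AM container
grep -i application_1456457210711_0002 /var/log/hadoop-yarn/yarn/yarn-yarn-nodemanager-*.log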
Created 02-26-2016 07:06 PM
I created the user on all the nodes, but the job is still failing with the following output:
xxxxx:/#yarn jar /usr/hdp/2.3.2.0-2950/hadoop-mapreduce/hadoop-mapreduce-examples.jar teragen 10000 /user/xxxxx/teraout8
16/02/26 10:52:18 INFO impl.TimelineClientImpl: Timeline service address: http://timelineuri:8188/ws/v1/timeline/
16/02/26 10:52:18 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 37 for rbalam on ha-hdfs:testnnhasvc
16/02/26 10:52:19 INFO security.TokenCache: Got dt for hdfs://testnnhasvc; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:testnnhasvc, Ident: (HDFS_DELEGATION_TOKEN token 37 for rbalam)
16/02/26 10:52:19 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
16/02/26 10:52:20 INFO terasort.TeraSort: Generating 10000 using 2
16/02/26 10:52:21 INFO mapreduce.JobSubmitter: number of splits:2
16/02/26 10:52:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1456512672399_0001
16/02/26 10:52:22 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:testnnhasvc, Ident: (HDFS_DELEGATION_TOKEN token 37 for rbalam)
16/02/26 10:52:24 INFO impl.YarnClientImpl: Submitted application application_1456512672399_0001
16/02/26 10:52:24 INFO mapreduce.Job: The url to track the job: http://timelineuri:8188/ws/v1/timeline/
16/02/26 10:52:24 INFO mapreduce.Job: Running job: job_1456512672399_0001
16/02/26 10:52:29 INFO mapreduce.Job: Job job_1456512672399_0001 running in uber mode : false
16/02/26 10:52:29 INFO mapreduce.Job: map 0% reduce 0%
16/02/26 10:52:29 INFO mapreduce.Job: Job job_1456512672399_0001 failed with state FAILED due to: Application application_1456512672399_0001 failed 2 times due to AM Container for appattempt_1456512672399_0001_000002 exited with exitCode: -1000
For more detailed output, check application tracking page: http://timlineserveruri:8088/cluster/app/application_1456512672399_0001 Then, click on links to logs of each attempt.
Diagnostics: Application application_1456512672399_0001 initialization failed (exitCode=255) with output: main : command provided 0
main : run as user is xxxxx
main : requested yarn user is xxxxx
Failing this attempt. Failing the application.
16/02/26 10:52:29 INFO mapreduce.Job: Counters: 0
Created 02-26-2016 07:45 PM
@rbalam, your previous problem ("User xxxxx not found. Failing this attempt.") is resolved. Now the containers are not launching, but the logs should show a reason why, so you have to debug the YARN logs. Usually this problem occurs when you have mismatched Java versions, an improperly set classpath, or directory permission issues.
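For the directory-permission case, one thing worth checking on each NodeManager is the ownership of the local and log directories YARN uses for localization; a minimal sketch (the paths are common HDP defaults for yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs and may differ on your cluster):
# on a NodeManager host; adjust paths to match yarn.nodemanager.local-dirs / log-dirs
ls -ld /hadoop/yarn/local /hadoop/yarn/log    # typically owned by yarn:hadoop
sudo -u yarn touch /hadoop/yarn/local/.perm_test && sudo -u yarn rm /hadoop/yarn/local/.perm_test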
Created 02-26-2016 07:52 PM
Yes, the "user not found" issue is gone after I created the user on all the nodes. Do you know where I can look to find which classpath/jars have a permissions issue?