Member since
09-30-2015
83
Posts
57
Kudos Received
1
Solution
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 15394 | 02-26-2016 11:19 PM |
02-26-2016
11:19 PM
1 Kudo
Jobs are running fine after I added the user to the hadoop group on all the nodes, but I am not sure adding the user account to the hadoop group is a good idea.
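For reference, a minimal sketch of what that change looks like on each node, assuming the group is literally named hadoop and using the placeholder account name xxxxx from this thread:

```bash
# Hypothetical example: add the existing local account to the hadoop group on a node
sudo usermod -aG hadoop xxxxx
# Verify the membership took effect
id xxxxx
```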
... View more
02-26-2016
07:52 PM
1 Kudo
Yes, the "user not found" issue is gone after I created the user on all the nodes. Do you know where I can look to find which classpath/jars have the permissions issue?
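Not from the original thread, but a rough sketch of places worth checking on a NodeManager host; the paths below are assumed HDP-style defaults, so substitute the actual values of yarn.nodemanager.local-dirs, yarn.nodemanager.log-dirs, and your NodeManager log location:

```bash
# Assumed default HDP paths; substitute the values from yarn-site.xml on your cluster
ls -ld /hadoop/yarn/local /hadoop/yarn/log
# The setuid container-executor binary and its permissions (path is an assumption)
ls -l /usr/hdp/current/hadoop-yarn-nodemanager/bin/container-executor
# Recent permission-related errors in the NodeManager log (assumed log location)
grep -i "permission" /var/log/hadoop-yarn/yarn/*nodemanager*.log | tail
```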
... View more
02-26-2016
07:06 PM
@Vikas Gadade I created the user on all the nodes but the job is still failing with the following output:

xxxxx:/# yarn jar /usr/hdp/2.3.2.0-2950/hadoop-mapreduce/hadoop-mapreduce-examples.jar teragen 10000 /user/xxxxx/teraout8
16/02/26 10:52:18 INFO impl.TimelineClientImpl: Timeline service address: http://timelineuri:8188/ws/v1/timeline/
16/02/26 10:52:18 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 37 for rbalam on ha-hdfs:testnnhasvc
16/02/26 10:52:19 INFO security.TokenCache: Got dt for hdfs://testnnhasvc; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:testnnhasvc, Ident: (HDFS_DELEGATION_TOKEN token 37 for rbalam)
16/02/26 10:52:19 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
16/02/26 10:52:20 INFO terasort.TeraSort: Generating 10000 using 2
16/02/26 10:52:21 INFO mapreduce.JobSubmitter: number of splits:2
16/02/26 10:52:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1456512672399_0001
16/02/26 10:52:22 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:testnnhasvc, Ident: (HDFS_DELEGATION_TOKEN token 37 for rbalam)
16/02/26 10:52:24 INFO impl.YarnClientImpl: Submitted application application_1456512672399_0001
16/02/26 10:52:24 INFO mapreduce.Job: The url to track the job: http://timelineuri:8188/ws/v1/timeline/
16/02/26 10:52:24 INFO mapreduce.Job: Running job: job_1456512672399_0001
16/02/26 10:52:29 INFO mapreduce.Job: Job job_1456512672399_0001 running in uber mode : false
16/02/26 10:52:29 INFO mapreduce.Job: map 0% reduce 0%
16/02/26 10:52:29 INFO mapreduce.Job: Job job_1456512672399_0001 failed with state FAILED due to: Application application_1456512672399_0001 failed 2 times due to AM Container for appattempt_1456512672399_0001_000002 exited with exitCode: -1000
For more detailed output, check application tracking page: http://timlineserveruri:8088/cluster/app/application_1456512672399_0001 Then, click on links to logs of each attempt.
Diagnostics: Application application_1456512672399_0001 initialization failed (exitCode=255) with output:
main : command provided 0
main : run as user is xxxxx
main : requested yarn user is xxxxx
Failing this attempt. Failing the application.
16/02/26 10:52:29 INFO mapreduce.Job: Counters: 0
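Not part of the original thread, but a quick sanity check that often narrows down exitCode=255 container initialization failures: confirm the submitting account resolves identically on every NodeManager host. Host names and the account name xxxxx below are placeholders:

```bash
# Check that the submitting user exists (with the expected uid/groups) on each NodeManager host
for h in nodemanager1 nodemanager2 nodemanager3; do   # placeholder host names
  ssh "$h" "id xxxxx"
done
```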
... View more
02-26-2016
11:58 AM
1 Kudo
Could you please confirm this again? If I need to have users on all the nodes in the cluster to run jobs successfully, I could end up with quite a few users on every node, which may become a maintenance headache down the line.
... View more
02-26-2016
11:46 AM
1 Kudo
@Neeraj Sabharwal I ran the job again and tried to get the yarn logs. Here is what I see:

xxxxx:~# yarn logs -applicationId application_1456457210711_0002
16/02/26 03:44:26 INFO impl.TimelineClientImpl: Timeline service address: http://yarntimelineserveraddress:8188/ws/v1/timeline/
/app-logs/xxxxx/logs/application_1456457210711_0002 does not have any log files.

Here is what I see on the ResourceManager UI:

Application application_1456457210711_0002 failed 2 times due to AM Container for appattempt_1456457210711_0002_000002 exited with exitCode: -1000
For more detailed output, check application tracking page: http://resourcemanageruri:8088/cluster/app/application_1456457210711_0002 Then, click on links to logs of each attempt.
Diagnostics: Application application_1456457210711_0002 initialization failed (exitCode=255) with output:
main : command provided 0
main : run as user is xxxxx
main : requested yarn user is xxxxx
User xxxxx not found
Failing this attempt. Failing the application.
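As a side note, when the logs are fetched by a different account than the one that submitted the job, yarn logs usually needs the owner spelled out explicitly. A sketch, assuming the job was submitted as the placeholder user xxxxx:

```bash
# Fetch aggregated logs for an application owned by another user
yarn logs -applicationId application_1456457210711_0002 -appOwner xxxxx
```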
... View more
02-26-2016
02:51 AM
2 Kudos
I enabled Kerberos on an HDP 2.3.2 cluster using Ambari 2.1.2.1 and then tried to run a MapReduce job on the edge node as a local user, but the job failed.

Error message:

Diagnostics: Application application_1456454501315_0001 initialization failed (exitCode=255) with output:
main : command provided 0
main : run as user is xxxxx
main : requested yarn user is xxxxx
User xxxxx not found
Failing this attempt. Failing the application.
16/02/25 18:42:28 INFO mapreduce.Job: Counters: 0
Job Finished in 7.915 seconds

My understanding is that the edge node local user should not be needed anywhere else, but I am not sure why my MapReduce job fails because the user does not exist on the other nodes. Please help.

Example MapReduce job:

XXXXX:~# yarn jar /usr/hdp/2.3.2.0-2950/hadoop-mapreduce/hadoop-mapreduce-examples-2.7.1.2.3.2.0-2950.jar pi 16 100000
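For anyone hitting the same "User xxxxx not found" error: on a Kerberized cluster YARN runs containers as the submitting user via the LinuxContainerExecutor, so that account has to resolve on every NodeManager host. A minimal sketch of creating it everywhere by hand (host list and uid are placeholders; an LDAP/SSSD setup avoids doing this manually):

```bash
# Create the account with the same uid on each worker node (placeholders throughout)
for h in worker1 worker2 worker3; do
  ssh "$h" "sudo useradd -u 1050 xxxxx"
done
```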
... View more
Labels:
- Apache Hadoop
02-16-2016
04:10 AM
3 Kudos
The ambari-agent running on the ambari-server node is not able to create its pid file, although the agent process is running in the background. When we check the status with ambari-agent status, it reports that the agent is NOT running, because no pid file was created when we started the agent. Could you please help us identify why the pid file is not being created? We have Ambari 2.2.0 running on RHEL 6.6, and both ambari-server and ambari-agent run as root.
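Not from the thread, but a few things worth checking; the pid and log paths below are the usual defaults for Ambari 2.2 and are assumptions for this install:

```bash
# Check that the pid directory exists and is writable by the user running the agent (root here)
ls -ld /var/run/ambari-agent
# Look for an error logged while the agent tried to write its pid file
tail -n 50 /var/log/ambari-agent/ambari-agent.log
# Restart cleanly and re-check the reported status
ambari-agent stop && ambari-agent start && ambari-agent status
```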
... View more
Labels:
- Apache Ambari
02-11-2016
04:40 PM
1 Kudo
I followed the same steps on another cluster where we don't have Hue HTTPS and Kerberos, and it works as expected there. So I think there is a problem with either the HTTPS and/or Kerberos settings.
... View more
02-11-2016
04:12 PM
1 Kudo
Hi @Neeraj Sabharwal, I followed the same steps and restarted the HttpFS and Hue services, but when I try to access the Hue file browser it throws exceptions. The only difference in this environment is that Hue is running on HTTPS and the cluster is Kerberized, but I am not sure whether that makes any difference. Can you please let me know how to troubleshoot this issue?

WebHdfsException at /filebrowser/
StandbyException: Operation category READ is not supported in state standby (error 403)

Request Method: GET
Request URL: https://falbdcdd0001v:8000/filebrowser/
Django Version:
Exception Type: WebHdfsException
Exception Value: StandbyException: Operation category READ is not supported in state standby (error 403)
Exception Location: /usr/lib/hue/desktop/libs/hadoop/src/hadoop/fs/webhdfs.py in _stats, line 205
Python Executable: /usr/bin/python2.6
Python Version:
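The 403 StandbyException generally means the file browser is talking to whichever NameNode happens to be in standby. A rough way to confirm, assuming NameNode HA service IDs nn1/nn2, Hue going through HttpFS, and the hue.ini location shown (all assumptions for this environment):

```bash
# Check which NameNode is currently active (service IDs are assumptions)
sudo -u hdfs hdfs haadmin -getServiceState nn1
sudo -u hdfs hdfs haadmin -getServiceState nn2
# Confirm hue.ini points at HttpFS (which follows HA failover) rather than a fixed NameNode
grep -n "webhdfs_url" /etc/hue/conf/hue.ini
```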
... View more
02-10-2016
10:17 PM
1 Kudo
I have a follow-up question on this. Let's say I removed all the users from Ranger that were synced from a local Unix server and then re-configured user sync to pull from an AD domain/group. In this case, do I need to create the "hive" user in that particular AD group before I can create a policy that lets Hive queries run as the hive user instead of the end users on the cluster? What about other service accounts like mapred, yarn, etc.? Do I need to create all of those accounts in AD? Please advise.
... View more