Created 01-11-2017 02:11 PM
After installing HDP 2.5 on my CentOS 7 server, all services are running well and the dashboard shows green status for them. But when I try to log in to the Hive CLI, an exception is thrown. I am also sure I used the hdfs user to execute the hive command. The command is:
su hdfs
hive --service cli
Diagnostics: Application application_1484105570599_0005 initialization failed (exitCode=255) with output:
main : command provided 0
main : run as user is nobody
main : requested yarn user is hdfs
Requested user nobody is not whitelisted and has id 99, which is below the minimum allowed 1000
Failing this attempt. Failing the application.
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:556)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown. Application application_1484105570599_0005 failed 2 times due to AM Container for appattempt_1484105570599_0005_000002 exited with exitCode: -1000
For more detailed output, check the application tracking page: http://master01.office.sao.so:8088/cluster/app/application_1484105570599_0005 Then click on links to logs of each attempt.
Attachment: yarn-yarn-nodemanager-master01log.tar.gz
Created 01-11-2017 02:33 PM
@elkan li Which user are you logged in as when launching? Can you try su hive and then launch the Hive CLI? It seems the issue is with the launching user.
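For example, roughly (assuming a local hive account exists on the host, as Ambari normally creates one):
su - hive          # switch to the hive user (login shell so that user's environment is loaded)
hive --service cli # launch the Hive CLI as that user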
Created 01-11-2017 02:55 PM
Thank you for the reply. I tried logging in as the hive user, but the exception still shows up. I also checked the groups and they look correct. I have no idea what else to try.
Did you notice the error message "main : run as user is nobody"? It is very strange.
Created 01-11-2017 05:44 PM
@elkan li Can you provide the output of below commands from the host you are logging into Hive:
cat /etc/group | grep hive
cat /etc/group | grep nobody
cat /etc/passwd | grep hive
cat /etc/passwd | grep nobody
Also, please confirm above commands output is same on NodeManager hosts as well.
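For reference, on a stock CentOS 7 host the nobody account is a system account with uid/gid 99, which lines up with the "has id 99" in the error above; the hive uid/gid shown below are only illustrative, since Ambari-created accounts vary per cluster:
cat /etc/passwd | grep nobody
# nobody:x:99:99:Nobody:/:/sbin/nologin      <- uid 99, below the 1000 minimum
cat /etc/passwd | grep hive
# hive:x:1002:1001::/home/hive:/bin/bash     <- example only; your uid/gid will differ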
Created 01-11-2017 05:57 PM
Are you using cgroups in YARN? If yes, the YARN local user (the user the AM container runs as) defaults to "nobody" (the default of yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user). Check whether CPU Scheduling and CPU Isolation are enabled in the YARN configs in the Ambari UI, since enabling them is what actually sets up cgroups.
In this case the AM failed because the nobody user ID is below 1000, the value set by the "Minimum user ID for submitting job" property. You can bypass this by changing the nobody user ID on all NodeManagers to a value above 1000.
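A quick way to confirm this on a NodeManager host is to compare the nobody uid with the minimum enforced by the container executor; the config path below is the usual HDP client-config location and may differ on your cluster:
id nobody
# uid=99(nobody) gid=99(nobody) groups=99(nobody)    <- below the minimum
grep min.user.id /etc/hadoop/conf/container-executor.cfg
# min.user.id=1000                                   <- the "Minimum user ID for submitting job" value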
Created 01-12-2017 02:55 AM
@rguruvannagari Thanks for pointing that out. I checked my YARN config and found that cgroups was not enabled, so I turned it on. After that, the CLI can log in and execute MapReduce jobs. Here are the resolution steps for anyone else's reference (a command-line sketch of the node-local steps follows the list):
1. umount /sys/fs/cgroup/cpu,cpuacct
2. mkdir /cgroup/cpu
3. Disable the CPU Scheduling & CPU Isolation properties.
4. Ensure the following properties are set:
yarn.nodemanager.container-executor.class=org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor
yarn.nodemanager.linux-container-executor.resources-handler.class=org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler
yarn.nodemanager.linux-container-executor.cgroups.hierarchy=/yarn
yarn.nodemanager.linux-container-executor.cgroups.mount=true
yarn.nodemanager.linux-container-executor.cgroups.mount-path=/cgroup
yarn.nodemanager.linux-container-executor.group=hadoop
5. Add the yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user property and set it to the desired user.
6. Configure the LinuxContainerExecutor to run jobs as the user submitting the job by adding the property yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users and setting it to false.
7. Set min.user.id to a lower value in /etc/hadoop/conf/container-executor.cfg on all NodeManagers.
8. Restart the YARN service.
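For convenience, here is a rough shell sketch of the node-local steps (1, 2, 7 and 8) to run on each NodeManager; the paths come from the steps above, the min.user.id value is only an example, and the yarn-site properties in steps 4-6 should be set through the Ambari UI so they are pushed to every host:
umount /sys/fs/cgroup/cpu,cpuacct     # step 1: detach the existing cpu,cpuacct cgroup mount
mkdir -p /cgroup/cpu                  # step 2: mount point matching cgroups.mount-path=/cgroup
# step 7: lower the minimum allowed container uid (example value only; pick one that covers your submitting users)
sed -i 's/^min.user.id=.*/min.user.id=500/' /etc/hadoop/conf/container-executor.cfg
# step 8: restart YARN, e.g. from Ambari: YARN > Service Actions > Restart All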
References:
https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/NodeManagerCgroups.html