Created 02-08-2016 10:34 AM
Say users are allowed to access a cluster from its edge node. If a user wants to run jobs on the cluster, does the user need an account on all the nodes of the cluster, or is an account on the edge node alone enough?
Created 02-09-2016 02:30 AM
No, the user does not need an account on every node of the cluster. An account on the edge node is enough.
For a new user there are two types of directories we need to create before the user accesses the cluster: 1. the user's home directory (created on the Linux filesystem, i.e. /home/<username>), and 2. the user's HDFS directory (created on the HDFS filesystem, i.e. /user/<username>).
As per Neeraj, you only need to create the HDFS home directory (i.e. /user/<username>) from the edge node. You can still run jobs on the cluster as the new user even if you haven't created their home directory on the Linux filesystem.
==============
Below are two scenarios:
a. I added a new user on the edge node with the command #useradd <username>. Before launching a job on the cluster, I need to create the user's HDFS directory: #sudo -u hdfs hadoop fs -mkdir /user/<username> followed by #sudo -u hdfs hadoop fs -chown -R <username>:<grp_name> /user/<username>
b. If the user comes from an LDAP server, then you only need to make your edge node an LDAP client and create the user's directory in HDFS with the commands below:
#sudo -u hdfs hadoop fs -mkdir /user/<username> #sudo -u hdfs hadoop fs -chown -R <username>:<grp_name> /user/<username>
Let me know if this clarifies what you are looking for.
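Both scenarios boil down to the same pair of HDFS commands. Here is a minimal dry-run sketch, assuming a hypothetical user "alice" and group "hadoop" (the script only prints the commands; remove the echoes and run it on a node with a Hadoop client and sudo access to the hdfs superuser to apply them for real):

```shell
# Hypothetical placeholders; substitute your real user and group names.
USER_NAME="alice"
GROUP_NAME="hadoop"

# The hdfs superuser creates the HDFS home directory, then hands
# ownership over to the new user.
CMD1="sudo -u hdfs hadoop fs -mkdir /user/$USER_NAME"
CMD2="sudo -u hdfs hadoop fs -chown -R $USER_NAME:$GROUP_NAME /user/$USER_NAME"

# Dry run: print instead of executing.
echo "$CMD1"
echo "$CMD2"
```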
Created 02-09-2016 06:35 AM
Thanks @Sagar Shimpi, I got a clear picture. One more question: what permissions do you give the root directory "/" on HDFS?
Created 02-09-2016 07:01 AM
The root directory "/" permissions are 755 (i.e. rwxr-xr-x) by default. These permissions follow the usual Linux umask convention: the umask for the hdfs user is 022. The owner and group are set to hdfs:hdfs.
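The 755 figure follows directly from the umask arithmetic: full directory permissions 777 with the hdfs user's umask 022 masked off. A quick shell check of that arithmetic:

```shell
# Directory permissions start at 777; the umask clears the write bits
# for group and other: 777 AND NOT 022 = 755 (i.e. rwxr-xr-x).
PERMS=$(printf '%03o' $(( 0777 & ~0022 )))
echo "$PERMS"   # prints 755
```

On a live cluster, something like `hdfs dfs -ls -d /` run from any client node should show the resulting drwxr-xr-x on the root directory itself.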
Created 02-16-2016 08:59 AM
This is right for a basic setup, but when your Hadoop cluster is integrated with Kerberos security, the authenticated user must exist on every node where the task runs.
Created 02-17-2016 04:26 AM
Hi @Vikas Gadade - I don't think this is the case. Even with a Kerberized cluster, you still only add the user on the gateway/client node. Make sure you have the proper keytabs in place.
Hadoop services use delegation tokens to reach the nodes and access/execute jobs within a Kerberized cluster where the tasks run.
Created 02-18-2016 06:53 AM
@Sagar Shimpi @ARUNKUMAR RAMASAMY I agree with @Vikas Gadade: if you want to execute jobs under your own user account, you have to make sure the user exists on every NodeManager node!
Please see this => "YARN containers in a secure cluster use the operating system facilities to offer execution isolation for containers. Secure containers execute under the credentials of the job user. The operating system enforces access restriction for the container. The container must run as the user that submitted the application." more info => https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/SecureContainer.html
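Given that requirement, a quick sanity check before submitting a job is to confirm the user resolves on every NodeManager host. A dry-run sketch, assuming hypothetical hostnames nm1.example.com/nm2.example.com and user "alice" (it only prints the ssh commands; drop the echo-capture and run them directly to check for real):

```shell
NM_HOSTS="nm1.example.com nm2.example.com"   # hypothetical NodeManager hosts
USER_NAME="alice"                            # hypothetical submitting user

# Collect the check commands: "id <user>" fails on any host where the
# account (local, or provided via SSSD/LDAP) does not exist.
OUT=$(for host in $NM_HOSTS; do
  echo "ssh $host id $USER_NAME"
done)
echo "$OUT"
```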
Created 02-17-2016 04:51 AM
Hi @Sagar Shimpi, @Vikas Gadade may be correct. This is the info I got from one of the Hadoop admins.
Can someone clarify or validate?
In non-secure mode (without Kerberos), an account on the edge node is sufficient because the user's containers run on the slave nodes under the yarn account. In secure mode (with Kerberos), you need accounts on all machines, because the user's containers run on the slave nodes under the real username.
Created 02-18-2016 07:00 AM
Please see my comment above.
In secure mode you need local user accounts on all NodeManager nodes.
Created 06-28-2017 06:53 AM
Thank you for clarifying. Is there any workaround for that? Or is it fixed in HDP 2.6? I also use SSSD + Kerberos only on the management nodes. AD users do not exist on the NodeManager hosts, so YARN is not working.