Member since: 02-08-2016
Posts: 793
Kudos Received: 669
Solutions: 85
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2874 | 06-30-2017 05:30 PM |
| | 3712 | 06-30-2017 02:57 PM |
| | 3138 | 05-30-2017 07:00 AM |
| | 3605 | 01-20-2017 10:18 AM |
| | 7823 | 01-11-2017 02:11 PM |
03-16-2016
11:32 AM
1 Kudo
Hi @Chris Nauroth, it worked for me. My mistake was that I had put comma-separated values into dfs.permissions.superusergroup. Once I removed the default value "hdfs" and replaced it with my new group, it worked.

One final question: I see that every time I point dfs.permissions.superusergroup at a new group, every user in that group becomes a superuser. For example, with group1=hdfs1 (user test1), group2=hdfs2 (user test2), and group3=hdfs3 (user test3):

1. First I set dfs.permissions.superusergroup=hdfs1 and restarted HDFS; user test1 was given superuser rights.
2. Next I set dfs.permissions.superusergroup=hdfs2 and restarted HDFS; user test2 was given superuser rights.
3. Then I set dfs.permissions.superusergroup=hdfs3 and restarted HDFS; user test3 was given superuser rights.

Thus users test1, test2, and test3 are all now acting as superusers with the same privileges as hdfs. If I want to revoke those rights, what is the way to do that?
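For what it's worth, a minimal sketch of the two usual revocation paths, reusing the group/user names above (the exact steps depend on how your cluster resolves group membership):

```bash
# Option 1: remove the user from the superuser group on the host(s) where
# HDFS resolves groups (typically the NameNode), then refresh the mapping:
gpasswd -d test1 hdfs1
sudo -u hdfs hdfs dfsadmin -refreshUserToGroupsMappings

# Option 2: point dfs.permissions.superusergroup back at a single trusted
# group (e.g. the default "hdfs") in hdfs-site.xml and restart HDFS.
```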
03-15-2016
09:27 AM
3 Kudos
I have an HDFS filesystem as below:

# sudo -u hdfs hadoop fs -ls /
dr-------- - hdfs hdfs 0 2016-03-09 15:14 /test1
drwxr-xr-x - bat hdfs 0 2016-03-09 15:10 /bat
drwxr-xr-x - hdfs hdfs 0 2016-03-06 11:25 /hdp
drwxr-xr-x - mapred hdfs 0 2016-03-06 11:25 /mapred
drwxrwxrwx - mapred hadoop 0 2016-03-06 11:26 /mr-history
drwxrwxrwx - hdfs hdfs 0 2016-03-08 15:30 /tmp
drwxr-xr-x - hdfs hdfs 0 2016-03-09 04:55 /user

I have created a user named 'bat', and the bat user can issue the same command:

[bat@node1 ~]$ id
uid=1009(bat) gid=1007(hdfs2) groups=1007(hdfs2)
[bat@node1 ~]$ hadoop fs -ls /
dr-------- - hdfs hdfs 0 2016-03-09 15:14 /test1
drwxr-xr-x - bat hdfs 0 2016-03-09 15:10 /bat
drwxr-xr-x - hdfs hdfs 0 2016-03-06 11:25 /hdp
drwxr-xr-x - mapred hdfs 0 2016-03-06 11:25 /mapred
drwxrwxrwx - mapred hadoop 0 2016-03-06 11:26 /mr-history
drwxrwxrwx - hdfs hdfs 0 2016-03-08 15:30 /tmp
drwxr-xr-x - hdfs hdfs 0 2016-03-09 04:55 /user

Is it possible for the bat user to see only the directories he owns or has permission on? The expected output would be:

[bat@node1 ~]$ hadoop fs -ls /
dr-------- - hdfs hdfs 0 2016-03-09 15:14 /test1
drwxr-xr-x - bat hdfs 0 2016-03-09 15:10 /bat

Can we block access to level-1 directories in Hadoop via HDFS/Ranger/etc.? If not, why is that?
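For context, a quick check of what the existing modes already enforce (same user and paths as above); note that, as far as I know, neither plain HDFS permissions nor Ranger policies hide entries from a root listing, they can only deny access to the paths themselves:

```bash
# bat has no read/execute bits on /test1 (mode dr--------), so entering
# the directory fails even though its entry shows up in "hadoop fs -ls /":
[bat@node1 ~]$ hadoop fs -ls /test1    # expected: Permission denied
```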
Labels:
- Apache Hadoop
03-15-2016
05:31 AM
@Sunile Manjee I agree with @vpoornalingam, and I see this enhancement is still pending.
03-14-2016
04:29 PM
1 Kudo
Hi @Sunile Manjee
1. Yes, you can run multiple ZooKeeper instances using Ambari.
2. You can dedicate one ZooKeeper ensemble to Solr and another to the rest of the services. Just make sure you specify the ZooKeeper quorum, with the correct ZooKeeper node names, in the config properties of every service. E.g., the screenshot below is for the YARN service; "zookeeper" should be replaced with your respective ZK node names.
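A minimal sketch of the per-service quorum idea (these are the usual property names for YARN and Solr, but verify them against your stack version; hostnames are illustrative):

```bash
# YARN -> dedicated ensemble A (set in yarn-site.xml via Ambari):
#   yarn.resourcemanager.zk-address=zkA1.example.com:2181,zkA2.example.com:2181,zkA3.example.com:2181

# Solr -> dedicated ensemble B (set in solr.in.sh or on the Solr start command):
#   ZK_HOST=zkB1.example.com:2181,zkB2.example.com:2181,zkB3.example.com:2181/solr
```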
03-14-2016
09:21 AM
3 Kudos
@Lubin Lemarchand First, check which processes are using the most memory:
1. Run: # top
2. Press 'm'
3. Check the top 2-3 processes in the '%MEM' column.

I would also suggest checking free memory with 'free -m'; most of the time memory is simply tied up in the cache. You can drop the cache with:
# echo 3 > /proc/sys/vm/drop_caches

To free disk space, you can delete old Hadoop log files located in /var/log/hadoop/<application-name>. Do let me know if it works.
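Putting those checks together, a small sketch (the log path and the 7-day retention window are assumptions; adjust for your install):

```bash
# Top memory consumers, without the interactive top session:
ps aux --sort=-%mem | head -n 5

# Free vs. cached memory:
free -m

# Drop the page cache (root only; harmless, but read performance dips briefly):
sync && echo 3 > /proc/sys/vm/drop_caches

# Reclaim disk by removing rotated Hadoop logs older than 7 days:
find /var/log/hadoop -name "*.log.*" -mtime +7 -delete
```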
03-14-2016
08:43 AM
1 Kudo
@Jonas Straub I tried the link too, but still no luck.
03-14-2016
07:52 AM
5 Kudos
I tried creating a superuser the same way as hdfs, as described in https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#The_Super-User, but it's not working. I also tried setting the property "dfs.permissions.superusergroup=<newgroup>", but that's not working either. Can anyone confirm whether this has been tested successfully?
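For anyone comparing notes, a minimal verification sketch, assuming the property was set to a single group name (here "hdpadmin" with member "testuser", both hypothetical) and HDFS was restarted:

```bash
# OS-level membership on the NameNode host:
id testuser

# Groups as the NameNode resolves them (must include hdpadmin):
hdfs groups testuser

# A superuser-only operation should now succeed:
sudo -u testuser hadoop fs -chown nobody:nobody /tmp/somefile
```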
Labels:
- Apache Hadoop
03-01-2016
06:16 PM
1 Kudo
@Smart Solutions In addition to @Karthik Gopal's answer: the Ambari startup script is located in /etc/init.d/, i.e. "/etc/init.d/ambari-server". You can issue:
# /etc/init.d/ambari-server <start|stop>
02-29-2016
07:57 PM
2 Kudos
@wsalazar I agree with Neeraj. Yes, you can expand the volume under the datanode directory and make it easily available to HDFS. There are two basic things you always need to take care of after increasing/extending an existing volume:
1. OS side: Make sure the volume reflects the new/extended size (e.g., on Linux use partprobe to re-read partition tables, resize2fs for LVM/ext filesystems, and kpartx for multipath volumes). Once the new size is visible to the OS, HDFS automatically picks up the new capacity for the datanodes, with no restart required.
2. HDFS side: To distribute data evenly across all datanodes, run the "Rebalancer" from the cluster UI or the command line.
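A rough sketch of both steps (device and mount paths are illustrative; substitute your own):

```bash
# 1. OS side - make the extended size visible:
partprobe                              # re-read partition tables
resize2fs /dev/vg01/lv_hdfs            # grow an ext filesystem on LVM
df -h /hadoop/hdfs/data                # datanode dir should show the new size

# 2. HDFS side - spread existing blocks evenly across datanodes:
sudo -u hdfs hdfs balancer -threshold 10
```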
02-29-2016
06:55 PM
Hi @qbadx
It seems a few datanodes are not able to communicate with the namenode/Kerberos server to get a ticket. I would suggest checking the following:
1. Check that the hostnames of all machines are correct in /etc/hosts.
2. Check the principals and the corresponding hostnames in Kerberos for the datanodes and namenodes.
3. Paste the job logs for more details.
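A quick checklist in command form (the keytab path and realm are illustrative):

```bash
# On each affected datanode:
cat /etc/hosts                                       # hostnames consistent?
hostname -f                                          # matches the principal's host part?
klist -kt /etc/security/keytabs/dn.service.keytab    # list principals in the keytab
kinit -kt /etc/security/keytabs/dn.service.keytab dn/$(hostname -f)@EXAMPLE.COM
```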