Member since: 04-03-2019
Posts: 92
Kudos Received: 6
Solutions: 5
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 3466 | 01-21-2022 04:31 PM |
 | 5909 | 02-25-2020 10:02 AM |
 | 3574 | 02-19-2020 01:29 PM |
 | 2586 | 09-17-2019 06:33 AM |
 | 5630 | 08-26-2019 01:35 PM |
02-10-2020
11:51 AM
@GangWar Here it is.
$ id -Gn testuser
hadoop wheel hdfs
02-10-2020
09:05 AM
I have run the following test case several times and got the same result.
Context:
1. My HDP cluster uses simple mode to determine user identity; Kerberos is not enabled.
2. Below are the permissions on the HDFS folder /data/test:
drwxrwxr-x - hdfs hadoop 0 2020-02-07 13:33 /data/test
So hdfs (the superuser) is the owner and hadoop is the owning group. Both the owner and the owning group have write permission on the /data/test folder.
Steps:
On an edge node, I used the id command to confirm that the logged-in user "testuser" is in the hadoop group.
$ id
uid=1018(testuser) gid=1003(hadoop) groups=1003(hadoop),10(wheel),1002(hdfs) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
However, testuser still ran into "Permission Denied".
$ hadoop fs -put ./emptyfile1.txt /data/test
put: Permission denied: user=testuser, access=WRITE, inode="/data/test":hdfs:hadoop:drwxrwxr-x
Then I used the hdfs account to change the folder owner to testuser.
$ hadoop fs -chown testuser /data/test
From the same edge node, testuser then ran the put command successfully.
Here is my question: why can't testuser write to the HDFS folder via the owning group's permissions?
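A check that might narrow this down (a hedged suggestion, assuming the usual HDFS behaviour that group membership is resolved on the NameNode host rather than on the client edge node) is to ask HDFS itself which groups it maps testuser to:
$ hdfs groups testuser
If hadoop does not appear in that output, the group lookup on the NameNode side (local /etc/group, SSSD, or LDAP) is the place to investigate rather than the edge node.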
Labels:
- HDFS
- Hortonworks Data Platform (HDP)
01-31-2020
09:07 AM
@cjervis Thanks. I reviewed the FAQ page, but it does not answer my questions. I guess I'd better wait until tomorrow, because the page mentions February 1, 2020 several times as the date for new launches and other changes.
01-31-2020
08:22 AM
I plan to get a Cloudera certification and need help with the following questions:
Question #1: I reviewed the page https://www.cloudera.com/about/training/certification.html. It looks like CCP Data Engineer is the only certification that has not been suspended or retired. Am I right about this?
Question #2: According to https://www.cloudera.com/about/training/certification/ccp-data-engineer.html, the only recommended Cloudera course to prepare for DE575 is the "Spark and Hadoop Developer" training course. Should I consider other courses?
Question #3: My workplace uses HDP. Do I need to get familiar with products like CDH before taking the exam?
Labels:
- Certification
01-15-2020
10:02 AM
@Shelton @EricL Thank you both. The correct ACL spec is group::r-x. Now the following command works:
sudo -u zeppelin hadoop fs -ls /warehouse/tablespace/managed/hive/test1
From what I just ran into, I feel that, by design, Hive takes extra steps to prevent users from accessing managed table files directly. I will follow that design and access Hive managed tables only through Hive.
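For anyone following along, the corrected form of the setfacl command from earlier in this thread (same path, run as the hdfs superuser; the getfacl call is only there to verify the result) would be:
# sudo -u hdfs hadoop fs -setfacl -m group::r-x /warehouse/tablespace/managed/hive/test1
# sudo -u hdfs hadoop fs -getfacl /warehouse/tablespace/managed/hive/test1
Keep in mind that this loosens the ACL that HDP places on managed-table directories, which is the restriction discussed above.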
01-14-2020
05:09 PM
I tried the following command:
# sudo -u hdfs hadoop fs -setfacl -m g::rx /warehouse/tablespace/managed/hive/test1
But I got this error:
-setfacl: Invalid type of acl in <aclSpec> :g::rx
The ACL spec is meant to change the owning group's permissions to rx. Any suggestions?
01-14-2020
09:49 AM
I might have found the reason. I ran the following command as hdfs, which is the HDFS superuser:
$ hadoop fs -getfacl /warehouse/tablespace/managed/hive/test1
# file: /warehouse/tablespace/managed/hive/test1
# owner: hive
# group: hadoop
user::rwx
user:hive:rwx
group::---
mask::rwx
other::---
default:user::rwx
default:user:hive:rwx
default:group::---
default:mask::rwx
default:other::---
As I understand it, the output shows that the owning group has no permissions on the folder. My guess is that HDP Hive uses ACLs to limit direct access to the files behind managed tables, so that managed tables can only be accessed through Hive.
01-14-2020
08:05 AM
# hdfs groups zeppelin
zeppelin : hadoop zeppelin
On the NameNode:
# id zeppelin
uid=1018(zeppelin) gid=1003(hadoop) groups=1003(hadoop),1005(zeppelin)
01-13-2020
09:05 PM
I am using HDP. The inode in the following commands is a managed Hive table.
# id zeppelin
uid=1017(zeppelin) gid=1003(hadoop) groups=1003(hadoop),1005(zeppelin)
# sudo -u zeppelin hadoop fs -ls /warehouse/tablespace/managed/hive/test1
ls: Permission denied: user=zeppelin, access=READ_EXECUTE, inode="/warehouse/tablespace/managed/hive/test1":hive:hadoop:drwxrwx---
The user zeppelin is in the hadoop group, which has full permissions on the HDFS folder. So why do I get the permission error?
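One way to dig further (a hedged suggestion; plain permission bits do not show extended ACL entries, which can still deny the owning group) is to dump the directory's ACLs:
# sudo -u hdfs hadoop fs -getfacl /warehouse/tablespace/managed/hive/test1
A group::--- entry in that output would explain the denial even though the permission bits look permissive.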
11-20-2019
09:58 AM
@ManuelCalvo Thanks, your solution works. My environment does not have Kerberos or Ranger enabled, so I skipped step 2. For step 4, I set:
hbase.coprocessor.master.classes = org.apache.atlas.hbase.hook.HBaseAtlasCoprocessor
Also, as I found out, Atlas currently only stores HBase metadata; it does not store HBase lineage data as it does for Hive.
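In case it helps others, a quick way to confirm the coprocessor class made it into the effective HBase configuration (the path below is the usual HDP client config location and may differ in your environment):
$ grep -A 2 "hbase.coprocessor.master.classes" /etc/hbase/conf/hbase-site.xml
The HBase Master log is another place to check whether the hook was actually loaded at startup.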