I'm using Cloudera 5.10.1-1.cdh5.10.1.p0.10.
Important configuration overview:
- direct Active Directory integration with Windows Server 2016
- 1x Balancer
- 4x DataNode
- 1x HttpFS
- 2x NFS Gateway
- 1x NameNode
- 1x Secondary NameNode
I'm able to use an Active Directory user, initialize a session with kinit, and then write / delete using the hdfs dfs commands.
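For reference, the working direct-HDFS path looks roughly like this (the principal and paths are placeholders, not my actual names):

```shell
# Obtain a Kerberos ticket for the AD user (placeholder principal)
$ kinit aduser@EXAMPLE.COM

# Verify the ticket was granted
$ klist

# These all work fine through the normal HDFS client
$ hdfs dfs -mkdir /user/aduser/test
$ hdfs dfs -put localfile.txt /user/aduser/test/
$ hdfs dfs -rm /user/aduser/test/localfile.txt
```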
However, I mounted an NFS target on a different server using
$ mount -t nfs -o vers=3,proto=tcp,nolock <nfs_server_hostname>:/ /hdfs_nfs_mount
but when I'm writing a file I get an error like:
Cannot create directory 'test'. Permission denied
The strange part is that the directory is still visible as normal. When I copy a file I get the same error message, and the file shows up as a 0-byte file. It looks like a latency problem, but I have no clue how to investigate further.
I already upgraded my CDH version, created a second gateway, tested the AD integration, and used different servers, but I'm stuck. I couldn't find anything like this on the web or in the documentation, since most "Permission denied" problems turn out to be security issues.
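For anyone hitting this: as I understand it, the NFS Gateway impersonates the connecting client user, so the proxyuser settings have to allow that, and in a Kerberized cluster the gateway needs its own principal. A minimal sketch, assuming the gateway service runs as "nfsserver" (adjust the user, keytab path, and realm to your setup):

```xml
<!-- core-site.xml on the NameNode: allow the NFS gateway's service user
     (assumed here to be "nfsserver") to impersonate connecting users -->
<property>
  <name>hadoop.proxyuser.nfsserver.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.nfsserver.hosts</name>
  <value>*</value>
</property>

<!-- hdfs-site.xml on the gateway host: Kerberos credentials for the gateway -->
<property>
  <name>nfs.keytab.file</name>
  <value>/etc/hadoop/conf/nfsserver.keytab</value>
</property>
<property>
  <name>nfs.kerberos.principal</name>
  <value>nfsserver/_HOST@EXAMPLE.COM</value>
</property>
```

If the proxyuser entries don't match the user the gateway actually runs as, writes through the mount can fail with "Permission denied" even though direct hdfs dfs access works.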
Is there any update on this? I am having the same problem.
Not really, but I found a workaround that works for me.
- I discovered that it worked from a Windows client once I granted all permissions under Services for Network File System => Properties of Client for NFS
- I disabled authentication on HDFS wherever possible (leaving general Kerberos intact, but as I understood it, the HDFS permissions are then not explicitly enforced)
So it looks like a permission problem. Perhaps someone can explain how to configure this correctly in a Kerberized setup (with direct Active Directory), but I assume a general approach for this is not obvious to find.
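In case it helps anyone, I believe the "disable HDFS permissions" part of my workaround corresponds to turning off permission checking in hdfs-site.xml. A sketch of what I mean; note this removes a safety net cluster-wide, so treat it as a workaround, not a fix:

```xml
<!-- hdfs-site.xml: disable HDFS permission checking entirely.
     Kerberos authentication itself stays in place; only the
     file-level permission enforcement is skipped. -->
<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>
```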