Member since: 04-28-2016
Posts: 22
Kudos Received: 4
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 15565 | 01-30-2019 12:29 AM
 | 4772 | 09-28-2018 12:17 AM
 | 9477 | 05-15-2018 11:45 PM
01-30-2019
12:29 AM
1 Kudo
OK I just found it: export HADOOP_ROOT_LOGGER=DEBUG,console
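For anyone landing here later, a minimal sketch of the workaround (paths and the Kerberos debug flag are standard Hadoop/JVM options, but the exact output on your cluster may differ):

```shell
# Route all Hadoop *client* logging to the console at DEBUG level.
# HADOOP_ROOT_LOGGER overrides the log4j root logger for CLI commands only;
# it does not change daemon (NameNode/DataNode) log levels.
export HADOOP_ROOT_LOGGER=DEBUG,console

# For Kerberos-specific tracing, the JVM's own debug switch can be added too:
# export HADOOP_OPTS="-Dsun.security.krb5.debug=true"

# Subsequent client commands now emit DEBUG output, e.g.:
# hdfs dfs -ls /
```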
01-30-2019
12:23 AM
Hi, when executing "hdfs dfs -ls /" we are getting a KrbException: Fail to create credential. (63) - No service creds, even with a valid Kerberos ticket on a particular Kerberos realm. We have been looking into the source code of the JVM, and we think we could pinpoint the cause with DEBUG-level logging. According to http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html, there is a loglevel option to the hdfs command, so we launch it like this: hdfs dfs -ls / --loglevel DEBUG But that seems to have no effect (only WARN is printed). How can we control the log level? Thanks in advance
01-14-2019
11:50 PM
1 Kudo
Hi, OK, I confirm that this really applies to the admin user (or, I guess, to every superuser). Other users cannot modify the workflow. Thanks
12-27-2018
03:25 AM
Hello, From a non-superuser account, I created a workflow and shared it with the group "visualization" in read-only mode. The superuser "admin" is a member of the "visualization" and "default" groups only. As can be seen from the screenshot above, the workflow is not shared with any other user/group. But I discovered that the user "admin" is able to edit and save the workflow without restrictions. How is it possible that a user can edit a workflow when he should not be able to do so? Thanks in advance
Labels:
- Cloudera Hue
12-12-2018
02:39 AM
Hi Nnr, I use this command to copy/overwrite the file located at local path /tmp/config-default.xml: hdfs dfs -ls -C /user/hue/oozie/workspaces/ | grep hue-oozie- | xargs -I % sh -c 'hdfs dfs -put -f /tmp/config-default.xml %' Use at your own risk 😉 It is a pity that HDFS does not implement symlinks; that would be much more maintainable. Regards
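To see what that list-filter-copy pipeline does without touching HDFS, the same pattern can be exercised on local directories (all paths below are hypothetical, made up for the demo; on a real cluster the ls/cp steps are the hdfs dfs equivalents from the command above):

```shell
# Set up fake workspace directories plus one that should NOT match.
mkdir -p /tmp/ws-demo/hue-oozie-111 /tmp/ws-demo/hue-oozie-222 /tmp/ws-demo/other
printf '<configuration/>\n' > /tmp/ws-demo/config-default.xml

# List dirs, keep only the hue-oozie- workspaces, copy the config into each.
ls -d /tmp/ws-demo/*/ | grep hue-oozie- \
  | xargs -I % sh -c 'cp /tmp/ws-demo/config-default.xml %'
```

After running this, both hue-oozie-* directories contain config-default.xml, while /tmp/ws-demo/other is untouched, which is exactly the selective-overwrite behaviour of the HDFS one-liner.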
09-28-2018
12:17 AM
All solved, it was a misconfiguration of HDFS. The property dfs.permissions was set to false (!). Thanks!
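For reference, this switch lives in hdfs-site.xml; on recent Hadoop versions the property is named dfs.permissions.enabled (dfs.permissions is the older, deprecated name). A sketch of the corrected fragment:

```xml
<!-- hdfs-site.xml: re-enable HDFS permission checking -->
<property>
  <name>dfs.permissions.enabled</name>
  <value>true</value>
</property>
```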
09-27-2018
06:11 AM
Hi Thomas, Thank you for your response. We are not using Sentry. The output of getfacl is:

hdfs dfs -getfacl /user/hue/oozie/workspaces/hue-oozie-1538051691.26
# file: /user/hue/oozie/workspaces/hue-oozie-1538051691.26
# owner: SVC_CTOS_SENTILO
# group: hue
getfacl: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.

Incidentally, I am even able to edit a file that is at 0600 and owned by another user. I also created a 0600 folder with a 0600 file inside; same behaviour. Both users are in the hadoop and hue groups, but that shouldn't be a problem, since, as far as I understand it, 0600 means only the owner of the file should be able to read and write, and nobody else. The owner of the file is SVC_CTOS_SENTILO, both from Hue and from the hdfs dfs CLI command. Thank you, Best Regards
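The 0600 expectation described above is standard POSIX behaviour, easy to demonstrate on a local filesystem (the file path is made up for the demo; HDFS only honours the same rule when permission checking is enabled):

```shell
# POSIX mode 0600: only the owner may read/write; group and others get nothing.
touch /tmp/acl-demo.txt
chmod 600 /tmp/acl-demo.txt
stat -c '%a' /tmp/acl-demo.txt   # prints: 600
```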
09-19-2018
02:42 AM
Hello, I would like to ask for advice on Hue configuration. We are running a KDC security-enabled cluster with multiple users belonging to various groups. Currently we rely heavily on Hue and on Oozie workflows designed from Hue. Users create their workflows under their own user in Hue. Workflows of a particular user are not accessible to others from the list of workflows unless explicitly shared, which is fine. However, there are problems we would like to solve: 1. Other users can still access the workspaces of those workflows via HDFS, either with Hue's "File browser" or directly via the hdfs command. Particularly from Hue, it seems that anyone can access a workspace directory and even open its files, even if I explicitly change the directory and file permissions to 600. (See the attached screenshot.) 2. The properties of launched workflows can be seen by other users in the "Configuration" tab, regardless of their permissions on the workflow. Can those values be hidden somehow? Thanks in advance!
Labels:
- Apache Oozie
- Cloudera Hue
- HDFS
- Kerberos
05-17-2018
02:53 PM
Nope, that "pip install kudu-python" certainly didn't work for me, failing with "Cannot find installed kudu client.". Obviously, kudu was accessible from the CLI, so I guess the build searches for some directories that don't get created by a parcel install. Thanks
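For anyone hitting the same error: the kudu-python build looks for the Kudu client headers and libraries at build time, not for the kudu CLI, and it can be pointed at them via the KUDU_HOME environment variable. A hedged sketch (the parcel path below is an assumption, not verified here):

```shell
# Point the kudu-python build at the Kudu client install (path is hypothetical).
export KUDU_HOME=/opt/cloudera/parcels/KUDU

# With KUDU_HOME set, the build can locate the client libs:
# pip install kudu-python
```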
05-17-2018
09:28 AM
Thank you very much Mike for the follow-up! Our client wanted us to stop using Python, so I cannot verify your solution. The system is CentOS 7.4.1708. I will return to this Jira issue in the future if there is a need to continue with python-kudu. Best Regards