Member since: 05-17-2016
Posts: 46
Kudos Received: 22
Solutions: 13
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3177 | 06-01-2018 11:40 AM |
| | 1253 | 06-30-2017 10:12 AM |
| | 1505 | 06-30-2017 10:09 AM |
| | 929 | 06-30-2017 10:04 AM |
| | 941 | 06-30-2017 10:03 AM |
08-02-2018 05:30 AM
@Seongmin Park It looks like you are trying to enable LDAP authentication in NiFi. Were you able to access the NiFi UI with certificates before enabling LDAP authentication? Could you share the values of the below properties from the nifi.properties file:
nifi.login.identity.provider.configuration.file
nifi.security.user.login.identity.provider
Also, are you using AD or OpenLDAP as your LDAP implementation?
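For reference, a minimal sketch of how those two properties typically look once LDAP login is wired up (the file path and provider identifier below are the common defaults, not values from your environment):

nifi.security.user.login.identity.provider=ldap-provider
nifi.login.identity.provider.configuration.file=./conf/login-identity-providers.xml

The ldap-provider itself is defined in login-identity-providers.xml; a typical entry looks like the following (hostname, DNs and password are placeholders you must replace):

<provider>
    <identifier>ldap-provider</identifier>
    <class>org.apache.nifi.ldap.LdapProvider</class>
    <property name="Authentication Strategy">SIMPLE</property>
    <property name="Manager DN">cn=admin,dc=example,dc=com</property>
    <property name="Manager Password">password</property>
    <property name="Url">ldap://ldap.example.com:389</property>
    <property name="User Search Base">ou=users,dc=example,dc=com</property>
    <property name="User Search Filter">uid={0}</property>
    <property name="Identity Strategy">USE_DN</property>
    <property name="Authentication Expiration">12 hours</property>
</provider>

The User Search Filter is usually uid={0} for OpenLDAP and sAMAccountName={0} for AD, which is why the AD-vs-OpenLDAP question above matters.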
06-01-2018 11:40 AM
@Himani Bansal The NameNode periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly. A Blockreport contains a list of all blocks on a DataNode. The NameNode marks DataNodes without recent Heartbeats as dead and does not forward any new IO requests to them. The NameNode ensures that each block is sufficiently replicated: when it detects the loss of a DataNode, it instructs the remaining nodes to maintain adequate replication by creating additional block replicas. For each lost replica, the NameNode picks a (source, destination) pair where the source is an available DataNode with another replica of the block and the destination is the target for the new replica.
Reference: https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html#Data+Replication
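As a rough worked example of the timings involved (the property names are the standard HDFS ones; defaults can vary by version): the heartbeat interval is set by dfs.heartbeat.interval (default 3 seconds) and the recheck interval by dfs.namenode.heartbeat.recheck-interval (default 300000 ms, i.e. 5 minutes). A DataNode is marked dead after roughly 2 * recheck-interval + 10 * heartbeat-interval, which with defaults is 2*300s + 10*3s = 630 seconds (10.5 minutes). You can list live and dead DataNodes with:
hdfs dfsadmin -report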
07-03-2017 12:30 PM
1 Kudo
@Rishi Currently, if your cluster is not kerberized, any user can simply export the HADOOP_USER_NAME variable and perform any activity as that user; there is no way to restrict that.
For example, as the regular user kunal we cannot delete /mapred/system, because it is owned by hdfs:
[kunal@s261 ~]$ hdfs dfs -ls /mapred
Found 1 items
drwxr-xr-x - hdfs hdfs 0 2017-04-24 11:33 /mapred/system
[kunal@s261 ~]$ hdfs dfs -ls /mapred/system
[kunal@s261 ~]$
[kunal@s261 ~]$ hdfs dfs -rmr /mapred/system
rmr: DEPRECATED: Please use 'rm -r' instead.
17/04/26 14:30:56 WARN fs.TrashPolicyDefault: Can't create trash directory: hdfs://s261.openstacklocal:8020/user/kunal/.Trash/Current/mapred
org.apache.hadoop.security.AccessControlException: Permission denied: user=kunal, access=WRITE, inode="/user/kunal/.Trash/Current/mapred":hdfs:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
If you then export the variable, we can delete the file:
[kunal@s261 ~]$ export HADOOP_USER_NAME=hdfs
[kunal@s261 ~]$
[kunal@s261 ~]$ hdfs dfs -rmr /mapred/system
rmr: DEPRECATED: Please use 'rm -r' instead.
17/04/26 14:31:15 INFO fs.TrashPolicyDefault: Moved: 'hdfs://s261.openstacklocal:8020/mapred/system' to trash at: hdfs://s261.openstacklocal:8020/user/hdfs/.Trash/Current/mapred/system
The only way to fix this is to set up Kerberos: once the cluster is kerberized, the user is derived from the Kerberos principal even if you export the variable:
[root@krajguru-e1 ~]# kinit kunal
Password for kunal@LAB.HORTONWORKS.NET:
[root@krajguru-e1 ~]#
[root@krajguru-e1 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: kunal@LAB.HORTONWORKS.NET
Valid starting Expires Service principal
07/03/2017 12:24:39 07/03/2017 22:24:39 krbtgt/LAB.HORTONWORKS.NET@LAB.HORTONWORKS.NET
renew until 07/10/2017 12:24:34
[root@krajguru-e1 ~]#
[root@krajguru-e1 ~]# hdfs dfs -ls /mapred/
Found 1 items
drwxr-xr-x - hdfs hdfs 0 2017-04-21 11:47 /mapred/system
[root@krajguru-e1 ~]#
[root@krajguru-e1 ~]# export HADOOP_USER_NAME=hdfs
[root@krajguru-e1 ~]#
[root@krajguru-e1 ~]# hdfs dfs -rmr /mapred/system
rmr: DEPRECATED: Please use 'rm -r' instead.
17/07/03 12:25:11 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 0 minutes.
rmr: Failed to move to trash: hdfs://e1.openstacklocal:8020/mapred/system: Permission denied: user=kunal, access=WRITE, inode="/mapred/system":mapred:hdfs:drwxr-xr-x
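For completeness, enabling Kerberos ultimately boils down to the following core-site.xml settings, normally applied through Ambari's Kerberos wizard rather than edited by hand (shown here only as a sketch):

<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>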
06-30-2017 10:12 AM
1 Kudo
@Pankaj Degave This is a known issue, tracked in JIRA as KNOX-949.
06-30-2017 10:09 AM
1 Kudo
@Pankaj Degave
You can use the below call to get only the fields shown in the Ranger UI:
curl -o ranger.query --negotiate -u : -X GET "http://<ambari-infra-solr-instance-hostname>:8886/solr/ranger_audits_shard1_replica1/select?q=*%3A*&fq=evtTime%3A%5B2017-06-11T10%3A44%3A00Z+TO+NOW%5D&fl=policy,evtTime,reqUser,repo,resource,restype,access,result,enforcer,cliIP,cluster,event_count&sort=evtTime+desc&start=0&rows=307600&wt=csv&version=2"
Adjust the evtTime filter depending on how far back you want to pull logs: the query above pulls all audit records from 2017-06-11T10:44:00Z onward, so change evtTime to the timestamp of the first record in Ranger to pull everything.
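For readability, the URL-encoded fq parameter in that call decodes to:
fq=evtTime:[2017-06-11T10:44:00Z TO NOW]
so pulling everything from, say, the start of 2017 just means changing the lower bound to 2017-01-01T00:00:00Z.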
06-30-2017 10:04 AM
1 Kudo
@amankumbare Ambari does not set the sAMAccountName while creating service principals; it is AD that randomly populates the value, and if I'm not wrong, Ambari does not need sAMAccountName for service principals.
06-30-2017 10:03 AM
1 Kudo
@amankumbare
As far as I understand, the superuser "hdfs" does not go through authorization in Ranger, hence we do not see any audits for it in Ranger; we do see those audits in the hdfs-audit.log file, which is maintained by HDFS itself.
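For example, to spot-check what the superuser did, you can grep the audit log directly on the NameNode host (the path below is the usual HDP default; adjust for your installation):
grep "ugi=hdfs" /var/log/hadoop/hdfs/hdfs-audit.log | tail -5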
03-31-2017 02:48 PM
1 Kudo
You can use the below call, which lists all the configuration values for that specific service:
http://<Ambari-hostname>:8080/api/v1/clusters/<cluster-name>/configurations/service_config_versions?service_name=HDFS
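A minimal sketch of invoking it with curl (assuming the default admin credentials; substitute your own):
curl -u admin:admin "http://<Ambari-hostname>:8080/api/v1/clusters/<cluster-name>/configurations/service_config_versions?service_name=HDFS"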
03-24-2017 02:56 PM
Can you please check if the postgresql service is running on the Ambari server (if you are running the default embedded database for Ambari)?
# service postgresql status
# netstat -ntpl | grep 5432
If not, please start postgresql and then try to start ambari-server.
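That is, assuming the embedded PostgreSQL on a service-based init system (typical for HDP hosts of that era):
# service postgresql start
# ambari-server start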
03-24-2017 02:54 PM
Currently Ranger does not have a feature to map LDAP attributes to these fields, assuming the scenario above is about syncing LDAP users to Ranger.