Member since: 06-10-2016
Posts: 24
Kudos Received: 2
Solutions: 0
07-20-2017
04:59 AM
Ideally, Ranger should take care of HDFS file permissions based on the level of permission a user has on Hive tables. Thanks @dsun. Yes, we can disable doAs and give the hive user the required permissions. However, I would like to enable HDFS encryption for the Hive warehouse database directories, where only a few users will have access to the encryption key. If I disable doAs, then I have to give the hive user access to all keys, whereas I would rather enforce this per end user. I was able to achieve it through Ranger tag-based policies, where I define the Hive and HDFS permissions together. Currently, no synchronization happens from managed Hive tables' HDFS paths to Atlas, so I created a custom hook for this. It makes policies and permissions easy to manage.
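For reference, the doAs switch discussed here is a HiveServer2 setting in hive-site.xml (a sketch; in my setup I keep it enabled so queries run as the end user, which is why the key-access question arises):

```xml
<!-- hive-site.xml: whether HiveServer2 runs queries as the connected
     end user (true) or as the hive service user (false) -->
<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>
```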
07-18-2017
06:18 PM
@dsun
I followed the link and set up the Ranger policies (scenario 2, run as end user) correctly. I am able to create a Hive database/table only when I have write permission on the warehouse directory (via an HDFS policy). I understand that Hive just sits on top of the HDFS file system and does not own the directory, but what I am expecting is that Ranger deduces the file permission from the level of permission the user has in Hive. "If my database has 1000+ tables and a user needs WRITE permission for only 200 tables, then I have to create Ranger HDFS policies for those 200 directories granting the user WRITE permission."
07-18-2017
05:21 PM
Thanks @dsun for the quick reply. The question was posted multiple times and I am unable to remove the duplicates. I changed the Hive warehouse directory permissions and added hive.warehouse.subdir.inherit.perms. The user has all permissions for Hive in Ranger, but I am still unable to create a database. Please suggest.
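The property I added is a hive-site.xml setting (sketch of what I set; it makes warehouse subdirectories inherit the parent directory's permissions):

```xml
<!-- hive-site.xml: new table/partition directories under the warehouse
     inherit the permissions of their parent directory -->
<property>
  <name>hive.warehouse.subdir.inherit.perms</name>
  <value>true</value>
</property>
```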
07-18-2017
02:22 PM
I have an HDP 2.6 cluster. I would like to control access to Hive tables through Ranger, and I would also like to run my queries as the end user. I followed the HDP Ranger documentation and set 000 permissions on the directory /apps/warehouse/hive. What I noticed while working is that Ranger does not rely solely on the policies created for Hive (databases and tables): even if a user has WRITE permission defined in a Ranger Hive policy, the user still needs WRITE permission on the corresponding table's directory in HDFS. If my database has 1000+ tables and a user needs WRITE permission for only 200 tables, then I have to create Ranger HDFS policies for those 200 directories granting the user WRITE permission. I can give WRITE permission at the database level; however, I am worried about the possibility of a user removing other tables' files from the command line.
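The permission lockdown I applied amounts to the following HDFS commands (a sketch; the warehouse path is the one from my cluster, adjust to yours):

```shell
# Remove POSIX access on the warehouse directory so that only
# Ranger-authorized requests succeed (run as the hdfs superuser)
sudo -u hdfs hdfs dfs -chmod -R 000 /apps/warehouse/hive
# Verify the resulting mode
sudo -u hdfs hdfs dfs -ls -d /apps/warehouse/hive
```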
Labels:
- Apache Hive
- Apache Ranger
07-12-2017
07:12 PM
I have an HDP 2.6, non-Kerberos cluster. I installed Ranger KMS, but the service fails right after startup. Can anyone tell me if I am missing anything?
2017-07-13 00:03:21,648 INFO log - ------------------ Ranger KMSWebApp---------------------
2017-07-13 00:03:21,648 INFO log - provider string = dbks://http@localhost:9292/kms
2017-07-13 00:03:21,649 INFO log - URI = dbks://http@localhost:9292/kms scheme = dbks
2017-07-13 00:03:21,649 INFO log - kmsconf size= 225 kms classname=org.apache.hadoop.conf.Configuration
2017-07-13 00:03:21,649 INFO log - ----------------Instantiating key provider ---------------
2017-07-13 00:03:22,001 INFO RangerKeyStoreProvider - Credential keystore password not applied for KMS; clear text password shall be applicable
2017-07-13 00:03:22,002 ERROR RangerKMSDB - DB Flavor could not be determined
2017-07-13 00:03:23,133 INFO RangerKMSDB - Connected to DB : true
2017-07-13 00:03:23,136 INFO RangerMasterKey - Generating Master Key
2017-07-13 00:03:23,147 INFO AuditProviderFactory - ==> JVMShutdownHook.run()
2017-07-13 00:03:23,147 INFO AuditProviderFactory - JVMShutdownHook: Signalling async audit cleanup to start.
2017-07-13 00:03:23,148 INFO AuditProviderFactory - RangerAsyncAuditCleanup: Starting cleanup
2017-07-13 00:03:23,148 INFO AuditProviderFactory - JVMShutdownHook: Waiting up to 30 seconds for audit cleanup to finish.
2017-07-13 00:03:23,148 INFO AuditAsyncQueue - Stop called. name=kms.async
2017-07-13 00:03:23,148 INFO AuditAsyncQueue - Interrupting consumerThread. name=kms.async, consumer=kms.async.batch
2017-07-13 00:03:23,148 INFO AuditProviderFactory - RangerAsyncAuditCleanup: Done cleanup
2017-07-13 00:03:23,148 INFO AuditProviderFactory - RangerAsyncAuditCleanup: Waiting to audit cleanup start signal
2017-07-13 00:03:23,148 INFO AuditAsyncQueue - Caught exception in consumer thread. Shutdown might be in progress
2017-07-13 00:03:23,148 INFO AuditAsyncQueue - Exiting polling loop. name=kms.async
2017-07-13 00:03:23,148 INFO AuditAsyncQueue - Calling to stop consumer. name=kms.async, consumer.name=kms.async.batch
2017-07-13 00:03:23,148 INFO AuditProviderFactory - JVMShutdownHook: Audit cleanup finished after 1 milli seconds
2017-07-13 00:03:23,148 INFO AuditProviderFactory - JVMShutdownHook: Interrupting ranger async audit cleanup thread
2017-07-13 00:03:23,148 INFO AuditProviderFactory - <== JVMShutdownHook.run()
2017-07-13 00:03:23,148 INFO AuditBatchQueue - Stop called. name=kms.async.batch
2017-07-13 00:03:23,148 INFO AuditBatchQueue - Interrupting consumerThread. name=kms.async.batch, consumer=kms.async.batch.solr
2017-07-13 00:03:23,148 INFO AuditAsyncQueue - Exiting consumerThread.run() method. name=kms.async
2017-07-13 00:03:23,149 INFO AuditBatchQueue - Caught exception in consumer thread. Shutdown might be in progress
2017-07-13 00:03:23,149 INFO AuditProviderFactory - RangerAsyncAuditCleanup: Interrupted while waiting for audit startCleanup signal! Exiting the thread...
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
at org.apache.ranger.audit.provider.AuditProviderFactory$RangerAsyncAuditCleanup.run(AuditProviderFactory.java:487)
at java.lang.Thread.run(Thread.java:745)
2017-07-13 00:03:23,149 INFO AuditBatchQueue - Exiting consumerThread. Queue=kms.async.batch, dest=kms.async.batch.solr
2017-07-13 00:03:23,153 INFO AuditBatchQueue - Calling to stop consumer. name=kms.async.batch, consumer.name=kms.async.batch.solr
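The "DB Flavor could not be determined" error suggests the KMS database type was never set. For a manual install these values live in install.properties (a sketch with placeholder credentials; when installed via Ambari, the equivalent settings are under Ranger KMS > Configs):

```properties
# Ranger KMS install.properties -- database settings (placeholders)
DB_FLAVOR=MYSQL
SQL_CONNECTOR_JAR=/usr/share/java/mysql-connector-java.jar
db_host=localhost
db_name=rangerkms
db_user=rangerkms
db_password=changeMe
```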
Labels:
- Apache Ranger
12-03-2016
05:40 PM
Thanks @Rajkumar Singh
12-03-2016
05:29 PM
I came across a situation when inserting data into a Hive table from another table. The query was processed by two MR jobs: one succeeded and the other failed. I could see that a few records had been inserted into the target table. That was not surprising, since the two MR jobs ran independently and the insert is not transactional. I am trying to understand what happens if the same occurs while inserting data into Hive using Spark: if one of the executors/tasks fails and reaches the retry limit, will the whole job terminate, or can partial data get inserted into the table? Thanks in advance.
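Within a single job, Hadoop's output-commit protocol is what prevents partial visibility: each task attempt writes to a staging location, and output is only promoted (renamed) on success, so a failed attempt leaves nothing behind. A minimal sketch of that idea in plain Python (all names here are hypothetical, for illustration only; across multiple independent jobs, as in my MR case, each job commits separately, which is why partial data appeared):

```python
import os
import tempfile

def commit_output(records, final_path):
    """Write records to a staging file, then atomically promote it.

    Sketch of the output-commit idea: output lands in a temporary
    file and only becomes visible via a rename, so a failure before
    the rename leaves no partial file at final_path.
    """
    staging_dir = os.path.dirname(final_path) or "."
    fd, staging_path = tempfile.mkstemp(dir=staging_dir)
    try:
        with os.fdopen(fd, "w") as f:
            for record in records:
                f.write(record + "\n")
        # Atomic on POSIX when source and target share a filesystem
        os.rename(staging_path, final_path)
    except Exception:
        # Failed attempt: discard the staging file, expose nothing
        if os.path.exists(staging_path):
            os.remove(staging_path)
        raise
```

This mirrors the spirit of Hadoop's FileOutputCommitter; in Spark, failed task attempts are retried and their staging output discarded, and once a task exhausts its retries the stage (and job) is aborted before the job-level commit runs.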
Labels:
- Apache Hive
- Apache Spark
07-02-2016
05:17 PM
Restarting the Ambari service solved the issue for me.
07-01-2016
05:01 AM
Thanks @zblanco. I executed ./knoxcli.sh create-alias ldcSystemPassword --cluster default --value Password@123 and restarted Knox. It is working now.