Member since: 06-10-2016
Posts: 24
Kudos Received: 2
Solutions: 0
07-27-2017
06:06 PM
I am using Atlas 0.8 and I am able to create a tag and associate it with any entity. When the entity is deleted (e.g. a Hive database/table), the tag associated with it is not removed. Is this expected behavior in 0.8?
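For reference, this is how I checked what Atlas still holds for the dropped entity. A rough curl sketch against the Atlas v2 REST API; the host, credentials and qualifiedName are placeholders for my environment:

# Look up the table entity by its qualified name
curl -u admin:admin \
  "http://atlas-host:21000/api/atlas/v2/entity/uniqueAttribute/type/hive_table?attr:qualifiedName=default.mytable@mycluster"

# In my case the entity comes back with status DELETED (soft delete) while
# "classifications" still lists the tag, which is the behavior I am asking about.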
... View more
Labels:
07-20-2017
04:59 AM
Ideally, Ranger should take care of the HDFS file permissions based on the level of permission a user has on the Hive tables. Thanks @dsun, yes, we can disable doAs and give the hive user the required permissions. However, I would like to enable HDFS encryption for the Hive warehouse database directories, where only a few users will have access to the encryption keys. If I disable doAs, I have to give the hive user access to all of the keys, whereas I would like to enforce this per end user. I was able to achieve it through Ranger tag-based policies, where I define the Hive and HDFS permissions together. Currently there is no synchronization of a managed Hive table's HDFS path to Atlas, so I created a custom hook for this. It makes the policies and permissions easy to manage.
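In case it is useful to others, the custom hook essentially registers the table's warehouse directory as an hdfs_path entity in Atlas and attaches the same tag that is on the hive_table entity, so a single tag-based Ranger policy covers both Hive and HDFS. A rough curl sketch of the two calls; the host, paths, GUID and the example tag name are placeholders:

# 1) Register the table's directory as an hdfs_path entity
curl -u admin:admin -H "Content-Type: application/json" -X POST \
  http://atlas-host:21000/api/atlas/v2/entity \
  -d '{"entity":{"typeName":"hdfs_path","attributes":{"qualifiedName":"hdfs://mycluster/apps/warehouse/hive/mydb.db/mytable@mycluster","name":"mytable","path":"/apps/warehouse/hive/mydb.db/mytable"}}}'

# 2) Attach the same tag that is already associated with the hive_table entity
curl -u admin:admin -H "Content-Type: application/json" -X POST \
  http://atlas-host:21000/api/atlas/v2/entity/guid/<hdfs_path-entity-guid>/classifications \
  -d '[{"typeName":"PII"}]'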
... View more
07-18-2017
06:18 PM
@dsun
I followed the link and set up the Ranger policies (scenario 2, run as end user) correctly. I am able to create a Hive database/table only if I have write permission on the warehouse directory (via an HDFS policy). I understand that Hive just sits on top of the HDFS file system and does not own the directory. But what I am expecting is that Ranger deduces the file permissions from the level of permission the user has in Hive. "If my database has 1000+ tables and a user needs WRITE permission for only 200 tables, then I have to create Ranger HDFS policies for those 200 directories granting WRITE to the user."
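For what it is worth, rather than clicking through hundreds of HDFS policies in the Ranger UI, these can be scripted against Ranger's public REST API. A hedged sketch; the Ranger host, HDFS service name, path and user below are placeholders:

# Create one HDFS policy granting a user WRITE on a single table directory
curl -u admin:admin -H "Content-Type: application/json" -X POST \
  http://ranger-host:6080/service/public/v2/api/policy \
  -d '{
    "service": "mycluster_hadoop",
    "name": "mydb_mytable_dir",
    "resources": {"path": {"values": ["/apps/warehouse/hive/mydb.db/mytable"], "isRecursive": true}},
    "policyItems": [{
      "users": ["enduser1"],
      "accesses": [{"type": "read", "isAllowed": true},
                   {"type": "write", "isAllowed": true},
                   {"type": "execute", "isAllowed": true}]
    }]
  }'

Looping this over the 200 table directories still means 200 policies, which is exactly the overhead I would like Ranger to avoid.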
... View more
07-18-2017
05:21 PM
Thanks @dsun for the quick reply. The question got posted multiple times and I am unable to remove the duplicates. I changed the Hive warehouse directory permissions and added hive.warehouse.subdir.inherit.perms. The user has all permissions in Hive, but I am still unable to create a database. Please suggest.
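For completeness, these are roughly the changes I made; the warehouse path is the one from my cluster and the property is set through Ambari's custom hive-site:

# Lock down the warehouse directory so access is governed through Hive/Ranger
hdfs dfs -chmod -R 000 /apps/warehouse/hive

# Ambari -> Hive -> Custom hive-site, then restart Hive:
#   hive.warehouse.subdir.inherit.perms=true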
... View more
07-18-2017
02:22 PM
I have an HDP 2.6 cluster. I would like to control access to Hive tables through Ranger, and I would also like to run my queries as an end user. I followed the HDP documentation for Ranger and set 000 permissions on the directory /apps/warehouse/hive. What I noticed while working with this is that Ranger authorization does not work solely on the policies created for Hive (databases and tables): even if a user has WRITE permission defined in a Ranger Hive policy, the user still needs WRITE permission on the corresponding table's directory in HDFS. If my database has 1000+ tables and a user needs WRITE permission for only 200 tables, then I have to create Ranger HDFS policies for those 200 directories granting WRITE to the user. I could grant WRITE at the database level; however, I am worried that a user could then remove files belonging to other tables from the command line.
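For context, this is roughly the run-as-end-user setup I have; a sketch, using the standard hive-site property name and my warehouse path:

# HiveServer2 submits queries as the logged-in end user (Ambari -> Hive -> Settings)
#   hive.server2.enable.doAs=true

# Warehouse directory locked down per the HDP Ranger documentation I followed
hdfs dfs -chmod -R 000 /apps/warehouse/hive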
... View more
Labels:
07-18-2017
02:01 PM
I have an HDP 2.6 cluster. I would like to control access to Hive tables through Ranger, and I would also like to run my queries as an end user. I followed the HDP documentation for Ranger and set 000 permissions on the directory /apps/warehouse/hive. What I noticed while working with this is that Ranger authorization does not work solely on the policies created for Hive (databases and tables): even if a user has WRITE permission defined in a Ranger Hive policy, the user still needs WRITE permission on the corresponding table's directory in HDFS. If my database has 1000+ tables and a user needs WRITE permission for only 200 tables, then I have to create Ranger HDFS policies for those 200 directories granting WRITE to the user. I could grant WRITE at the database level; however, I am worried that a user could then remove files belonging to other tables from the command line. Can you give me, or point me to, design practices for managing Hive tables with Ranger policies without impersonation?
... View more
Labels:
07-12-2017
07:12 PM
I have an HDP 2.6, non-Kerberized cluster. I installed Ranger KMS, but the service fails right after startup. Can anyone tell me if I am missing anything? Log excerpt below:
2017-07-13 00:03:21,648 INFO log - ------------------ Ranger KMSWebApp---------------------
2017-07-13 00:03:21,648 INFO log - provider string = dbks://http@localhost:9292/kms
2017-07-13 00:03:21,649 INFO log - URI = dbks://http@localhost:9292/kms scheme = dbks
2017-07-13 00:03:21,649 INFO log - kmsconf size= 225 kms classname=org.apache.hadoop.conf.Configuration
2017-07-13 00:03:21,649 INFO log - ----------------Instantiating key provider ---------------
2017-07-13 00:03:22,001 INFO RangerKeyStoreProvider - Credential keystore password not applied for KMS; clear text password shall be applicable
2017-07-13 00:03:22,002 ERROR RangerKMSDB - DB Flavor could not be determined
2017-07-13 00:03:23,133 INFO RangerKMSDB - Connected to DB : true
2017-07-13 00:03:23,136 INFO RangerMasterKey - Generating Master Key
2017-07-13 00:03:23,147 INFO AuditProviderFactory - ==> JVMShutdownHook.run()
2017-07-13 00:03:23,147 INFO AuditProviderFactory - JVMShutdownHook: Signalling async audit cleanup to start.
2017-07-13 00:03:23,148 INFO AuditProviderFactory - RangerAsyncAuditCleanup: Starting cleanup
2017-07-13 00:03:23,148 INFO AuditProviderFactory - JVMShutdownHook: Waiting up to 30 seconds for audit cleanup to finish.
2017-07-13 00:03:23,148 INFO AuditAsyncQueue - Stop called. name=kms.async
2017-07-13 00:03:23,148 INFO AuditAsyncQueue - Interrupting consumerThread. name=kms.async, consumer=kms.async.batch
2017-07-13 00:03:23,148 INFO AuditProviderFactory - RangerAsyncAuditCleanup: Done cleanup
2017-07-13 00:03:23,148 INFO AuditProviderFactory - RangerAsyncAuditCleanup: Waiting to audit cleanup start signal
2017-07-13 00:03:23,148 INFO AuditAsyncQueue - Caught exception in consumer thread. Shutdown might be in progress
2017-07-13 00:03:23,148 INFO AuditAsyncQueue - Exiting polling loop. name=kms.async
2017-07-13 00:03:23,148 INFO AuditAsyncQueue - Calling to stop consumer. name=kms.async, consumer.name=kms.async.batch
2017-07-13 00:03:23,148 INFO AuditProviderFactory - JVMShutdownHook: Audit cleanup finished after 1 milli seconds
2017-07-13 00:03:23,148 INFO AuditProviderFactory - JVMShutdownHook: Interrupting ranger async audit cleanup thread
2017-07-13 00:03:23,148 INFO AuditProviderFactory - <== JVMShutdownHook.run()
2017-07-13 00:03:23,148 INFO AuditBatchQueue - Stop called. name=kms.async.batch
2017-07-13 00:03:23,148 INFO AuditBatchQueue - Interrupting consumerThread. name=kms.async.batch, consumer=kms.async.batch.solr
2017-07-13 00:03:23,148 INFO AuditAsyncQueue - Exiting consumerThread.run() method. name=kms.async
2017-07-13 00:03:23,149 INFO AuditBatchQueue - Caught exception in consumer thread. Shutdown might be in progress
2017-07-13 00:03:23,149 INFO AuditProviderFactory - RangerAsyncAuditCleanup: Interrupted while waiting for audit startCleanup signal! Exiting the thread...
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
at org.apache.ranger.audit.provider.AuditProviderFactory$RangerAsyncAuditCleanup.run(AuditProviderFactory.java:487)
at java.lang.Thread.run(Thread.java:745)
2017-07-13 00:03:23,149 INFO AuditBatchQueue - Exiting consumerThread. Queue=kms.async.batch, dest=kms.async.batch.solr
2017-07-13 00:03:23,153 INFO AuditBatchQueue - Calling to stop consumer. name=kms.async.batch, consumer.name=kms.async.batch.solr
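Given the "DB Flavor could not be determined" error, one thing worth verifying (a sketch; the property names are from the Ranger KMS dbks-site configuration as I understand it, and the values shown are MySQL-style placeholders) is that the KMS JDBC settings point at a valid database URL and driver:

# On the Ranger KMS host, check the JDBC settings the KMS actually loaded
grep -A1 "ranger.ks.jpa.jdbc" /etc/ranger/kms/conf/dbks-site.xml

# Expect entries along the lines of:
#   ranger.ks.jpa.jdbc.url    = jdbc:log4jdbc:mysql://db-host:3306/rangerkms
#   ranger.ks.jpa.jdbc.driver = net.sf.log4jdbc.DriverSpy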
... View more
Labels:
03-29-2017
10:14 AM
Were you able to solve the problem?
... View more
12-03-2016
06:17 PM
Thanks for the quick reply. Our cluster is not Kerberized. I went through the link; however, it explains how to start the Thrift server manually. Can't we run more than one Thrift server, one on each node?
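In case it helps, this is roughly how I understand a second Thrift server can be started by hand on the other node, on a different port so it does not clash with the Ambari-managed one. A sketch with typical HDP 2.4 paths and ports; adjust the user, path, master and port to your cluster:

# Run on the second node as the spark (or hive) service user
su - spark -c "/usr/hdp/current/spark-client/sbin/start-thriftserver.sh \
  --master yarn-client \
  --conf spark.ui.port=4041 \
  --hiveconf hive.server2.thrift.port=10016"

Ambari will not manage an instance started this way.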
... View more
12-03-2016
05:56 PM
We have an HDP 2.4 cluster and the Spark Thrift Server is installed on two nodes. When I start the Spark Thrift Servers from Ambari, it starts the one installed on the same node as Hive, while the other fails. The log file on the failed node says it failed to start the Thrift server on the host where Hive is installed. Do Hive and the Spark Thrift Server have to be installed on the same node? Can't we run more than one Thrift server?
... View more
Labels:
12-03-2016
05:40 PM
Thanks @Rajkumar Singh
... View more
12-03-2016
05:29 PM
I came across a situation while inserting data into a Hive table from another table. The query was processed using two MR jobs; one succeeded and the other failed, and I could see that some records had been inserted into the target table. That made sense to me, since the two MR jobs ran independently and the operation is not transactional. I am trying to understand what happens if the same occurs while inserting data into Hive using Spark: if one of the executors/tasks fails and reaches the retry limit, will the job terminate completely, or can partial data end up in the table? Thanks in advance.
... View more
Labels:
07-02-2016
05:17 PM
Restarting the Ambari service solved the issue for me.
... View more
07-01-2016
05:01 AM
Thanks zblanco. I executed ./knoxcli.sh create-alias ldcSystemPassword --cluster default --value Password@123 and restarted Knox. It is working now.
... View more
06-30-2016
02:31 PM
Thanks Divakar
... View more
06-30-2016
12:06 PM
1 Kudo
I followed the link to create an alias for storing the password. At first I used the plain-text password in the topology and it was working. However, when I replaced the password with the alias, I started running into a 401 error. Am I missing anything? My topology XML is attached. ./knoxcli.sh create-alias ldcSystemPassword --cluster MyCluster --value Password@123
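For reference, this is roughly how I understand the alias is meant to be wired up; a sketch, where the --cluster value must match the topology name (MyCluster.xml in this example):

# Create the alias for the topology named MyCluster
./knoxcli.sh create-alias ldcSystemPassword --cluster MyCluster --value Password@123

# In MyCluster.xml, reference the alias instead of the clear-text password:
#   <param>
#     <name>main.ldapRealm.contextFactory.systemPassword</name>
#     <value>${ALIAS=ldcSystemPassword}</value>
#   </param>

# Restart Knox afterwards so the topology is redeployed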
... View more
Labels:
06-29-2016
04:01 PM
Thanks a lot Dave
... View more
06-29-2016
03:45 PM
I have installed Hue 2.6 on CentOS 6.5, and the changes I made to /etc/hue/conf.empty are not reflected in the Hue UI configuration tab. Is there anything I am missing?
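A sketch of what I would check first; the paths and the init script name are what I believe the HDP Hue package uses, so treat them as assumptions:

# Confirm which configuration directory Hue actually reads (conf is often a symlink)
ls -l /etc/hue/conf

# Edit hue.ini in that directory, then restart Hue so the changes are picked up
/etc/init.d/hue restart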
... View more
Labels:
06-20-2016
11:01 AM
1 Kudo
The link explains how to create and export a service principal on a Linux KDC server. How do I create a service principal for Hue in a Windows Active Directory? I followed the steps below, but it didn't work:
1) created a user 'hue' in AD
2) setspn -A hue/hue-server-fqdn hue
3) ktpass /princ hue/hue-server-fqdn@ABC.COM /pass p@$$word /ptype KRB5_NT_PRINCIPAL /out hue.keytab
I copied the hue.keytab file to the host where Hue is running, but when I list the keytab with klist -kt hue.keytab it shows only the user principal hue@ABC.COM. Any help?
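Not authoritative, but in my understanding the piece that makes the keytab contain the service principal (rather than just the user principal) is mapping the SPN onto the AD account with /mapuser. A sketch; the FQDN, realm and password are placeholders:

ktpass /princ hue/hue-server-fqdn@ABC.COM /mapuser hue@ABC.COM /pass p@$$word /ptype KRB5_NT_PRINCIPAL /crypto All /out hue.keytab

After copying the keytab over, klist -kt hue.keytab should then list hue/hue-server-fqdn@ABC.COM.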
... View more
Labels:
06-14-2016
02:37 PM
Our cluster is running HDP 2.4.2 and Ambari 2.2.2. I followed the link and opened the Oozie Web UI in IE 11, and it worked fine. However, I am facing "GSSHeader did not find the right tag" in Chrome. Any idea?
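If it is a browser-side SPNEGO issue, one thing to try (a sketch; the domain is a placeholder) is telling Chrome which hosts it is allowed to negotiate Kerberos with:

# Launch Chrome with the Oozie host (or your whole domain) in the negotiation whitelist
chrome --auth-server-whitelist="*.example.com"

# On Windows the same can be set centrally via the AuthServerWhitelist policy (GPO/registry)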
... View more
06-10-2016
10:16 AM
We have a 6-node Kerberized cluster (HDP 2.4.2). We are using the Hive View for querying Hive tables. Previously, when we executed queries, the jobs were submitted under the logged-in user's name. However, after I installed Ranger, the jobs are being submitted as the user hive. Are we missing any setup? Thanks in advance.
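In case others hit the same thing: my understanding is that enabling the Ranger Hive plugin typically switches HiveServer2 to run jobs as the hive service user, so the first thing I would check is the doAs setting. A sketch; the path is the usual HDP client config location:

# Check the effective value on the HiveServer2 node
grep -A1 "hive.server2.enable.doAs" /etc/hive/conf/hive-site.xml

# To submit jobs as the logged-in end user again, set in Ambari -> Hive and restart HiveServer2:
#   hive.server2.enable.doAs=true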
... View more
Labels: