Member since: 04-13-2016
Posts: 422
Kudos Received: 150
Solutions: 55
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1934 | 05-23-2018 05:29 AM |
| | 4970 | 05-08-2018 03:06 AM |
| | 1685 | 02-09-2018 02:22 AM |
| | 2716 | 01-24-2018 08:37 PM |
| | 6172 | 01-24-2018 05:43 PM |
07-06-2017
03:14 PM
1 Kudo
@Indrek Mäestu

ROOT CAUSE
Most likely, hadoop.proxyuser.HTTP.hosts was not set to include the host where WebHCat is running.

RESOLUTION
1. Identify and note the node where the WebHCat Server runs.
2. In Ambari, under HDFS > Configs, check whether hadoop.proxyuser.HTTP.hosts is defined in the core-site section, or set it as below:
hadoop.proxyuser.HTTP.hosts=*
hadoop.proxyuser.HTTP.groups=*
3. If the parameter exists, update it to include the WebHCat node name.
4. If not, add the parameter and include the WebHCat node name.
5. Restart all services that require a restart, including Hive.
6. Run the Hive service check again.

Hope this helps you.
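The properties from step 2 end up in core-site.xml; here is a hedged sketch of the fragment (listing only the WebHCat host is the safer alternative to '*'; 'webhcat-node.example.com' is a placeholder, not a value from this cluster):

```xml
<!-- core-site.xml: allow the HTTP principal to impersonate users -->
<property>
  <name>hadoop.proxyuser.HTTP.hosts</name>
  <!-- or restrict to the WebHCat host, e.g. webhcat-node.example.com -->
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.HTTP.groups</name>
  <value>*</value>
</property>
```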
06-29-2017
04:08 AM
@Sami Ahmad I hope @Ishan has provided the correct root cause. Let me know if you have any questions.
06-27-2017
10:00 PM
1 Kudo
@Sami Ahmad Run the query as below: select count(*) from pa_lane_txn limit 2; The earlier count may not trigger any MapReduce job at all; the answer can come from the table's statistics in the metastore, which might not be up to date. If this helps, please 'accept' it.
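As a hedged sketch of the idea above: Hive can answer count(*) from metastore statistics, and hive.compute.query.using.stats controls that behavior, so either force a real scan or refresh the stats (table name taken from the post):

```sql
-- Force Hive to compute the count by scanning data instead of using
-- (possibly stale) metastore statistics
SET hive.compute.query.using.stats=false;
SELECT COUNT(*) FROM pa_lane_txn;

-- Or refresh the statistics so stats-based answers are accurate again
ANALYZE TABLE pa_lane_txn COMPUTE STATISTICS;
```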
06-15-2017
09:52 PM
2 Kudos
@Dhiraj There is no hard maximum as far as I know; the practical limit depends on the back-end metastore database you are using. I have tested up to 500,000 in production with Oracle as the back-end: hive.exec.max.dynamic.partitions=500000. There is no impact from adding it to the whitelist, but it is always better to set a concrete number so it does not hurt the cluster in the long term. Example: if a user keeps increasing partitions, where each partition is a very small file, the NameNode metadata grows proportionally, which may affect the cluster.
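For reference, a hedged sketch of the related dynamic-partition session settings (the values are illustrative, not recommendations for every cluster):

```sql
-- Enable dynamic partitioning and raise the limits for a heavy load
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.exec.max.dynamic.partitions=500000;        -- total across the job
SET hive.exec.max.dynamic.partitions.pernode=100000; -- per mapper/reducer
```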
06-15-2017
04:56 PM
@vperiasamy Thanks for confirming that the Ranger ACLs will work as-is. I will debug it. Thanks for the help, you are the best... 🙂
06-15-2017
04:40 PM
@vperiasamy Awesome. Can you please also let me know the permissions/functionality of the below in the Ranger KMS UI? It would be helpful if you can share any notes or links:
- Get
- Set Key Materials
- Get Keys
- Get Metadata

After installing Ranger KMS, even though the user has no permissions on the location '/data/protegrity/' in Ranger, but does have the 'Decrypt EEK' permission in the Ranger KMS UI, the user is able to read the data. My question now is: will the Ranger permissions (Read, Write, Create) not be honored on the encryption zone?
06-15-2017
04:03 PM
Hi team, I have a couple of questions on functionality. What I expected when using Ranger KMS is that when data is written in an encryption zone, the data should not be human-readable, but should instead look as below:

$ hdfs dfs -get /data/protegrity/data4.dat ./encrypted_data4.dat
$ cat encrypted_data4.dat
1AY&SX—“#„bd3ƒ'• DE_ENC256®XQy”ª8@¿UuaùfšÆe4@ãoNVÕh¡}69þC$8¤ÌªÒÓ»Ö]\GR®´éXûš™?âëD
}‹]ê~+¨ÑN•IJz?iÄÝ 5ùDüt.ïÆ,+í/–öõZ9õXÙ+]R_#Ä×â6>
¦KœÌ'„J çÜÑâ,OzÝi.Ú^4WG±´±
2P‹qããE¼iåsLH'xH×oÚ6_ˆ'„ôE¦¯î©{_Hç˃ðîËíÒ†t¾+’:ÁÓ‡›°àå7¢@fH“9¾XTd/F'Îc9«þí òûHýÁN‰QO4y5ànG¤wš2¢»<

Is this possible using Ranger KMS? Secondly, is it possible to do column-level encryption in Hive/HBase using Ranger KMS? Example as below:

0: jdbc:hive2://hortonworks.com> select * from table4;
+------------+---------------+---------------+-----------------------+------------------------+---------------------+
| table4.id | table4.fname | table4.lname | table4.fake_prim_nss | table4.fake_secnd_nss | table4.fake_bod_dt |
+------------+---------------+---------------+-----------------------+------------------------+---------------------+
| 1 | Sridhar | Reddy | 123456789 | 123456789 | 1990-03-23 |
| 2 | Happy | Tom | 234567890 | 234567890 | 1971-02-10 |
| 3 | Jun | Yu | 345678901 | 345678901 | 1972-10-23 |
+------------+---------------+---------------+-----------------------+------------------------+---------------------+
5 rows selected (0.255 seconds)
0: jdbc:hive2://hortonworks.com> select id, fname, lname, ptyProtectStr(cast(fake_prim_nss as string),'DE_nss23') as fake_prim_nss, fake_secnd_nss, fake_bod_dt, fake_bod_tms from table4;
+-----+---------+--------+----------------+-----------------+--------------+
| id | fname | lname | fake_prim_nss | fake_secnd_nss | fake_bod_dt |
+-----+---------+--------+----------------+-----------------+--------------+
| 2 | Happy | Tom | 682585704 | 234567890 | 1971-02-10 |
| 1 | Sridhar | Reddy | 115506653 | 123456789 | 1990-03-23 |
| 3 | Jun | Yu | 874950339 | 345678901 | 1972-10-23 |
+-----+---------+--------+----------------+-----------------+--------------+

Thirdly, how will Ranger KMS honor this when hive doAs (hive.server2.enable.doAs) is set to false? Any help is highly appreciated. Thanks in advance.
Labels:
- Apache HBase
- Apache Hive
- Apache Ranger
06-12-2017
09:35 PM
1 Kudo
The below script gets all the Hive databases and their underlying tables, views, and INDEX_TABLEs in a cluster into a CSV file. This helps in evaluating total counts for metrics or identifying particular tables:

beeline -u 'jdbc:hive2://zookeeper1:2181,zookeeper2:2181,zookeeper3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2' --outputformat=csv2 -f table.hql > tableslist.csv

where table.hql contains:

!tables

Hope this helps you.
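As a hedged follow-up sketch, the resulting CSV can be summarized per database with awk. The column layout is an assumption based on beeline's csv2 output for !tables, where the second column (TABLE_SCHEM) holds the database name:

```shell
# Count objects per Hive database from the beeline csv2 output.
# Assumed columns: TABLE_CAT,TABLE_SCHEM,TABLE_NAME,TABLE_TYPE,...
awk -F',' 'NR > 1 { count[$2]++ }
           END { for (db in count) print db "," count[db] }' tableslist.csv
```

Piping the result through sort gives a stable per-database listing for reports.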
06-02-2017
08:14 PM
Unfortunately, Spark is limited in terms of SQL-standard authorization. Spark isn't designed to work with Hive SQL-standard authorization, due to which creating a table does not create the default grants. Here are two related JIRAs that mention the same:
https://issues.apache.org/jira/browse/SPARK-8321
https://issues.apache.org/jira/browse/SPARK-12008
You may:
- create the necessary grants after table creation, or
- rely on storage-based authorization, i.e. HDFS permissions, or
- use Ranger for Hive authorization.
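For the first workaround, a hedged sketch of issuing the missing grant manually in Hive (database, table, and user names are illustrative, not from the original post):

```sql
-- Run in Hive with SQL-standard authorization enabled, after the table
-- has been created from Spark, to restore the owner's default privileges
GRANT ALL ON TABLE mydb.mytable TO USER spark_user;
```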
06-02-2017
08:10 PM
Thanks @Josh Elser. The encoding is done intentionally, and hence there is no way to disable/suppress it in the REST API calls; if you want to decode it, you need to write Java code accordingly. Here is the link: https://hbase.apache.org/book.html#_running_the_shell_in_non_interactive_mode
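For what it's worth, a hedged sketch: the HBase REST gateway returns row keys and cell values base64-encoded in its responses, so a quick decode is also possible from the shell without writing Java (the sample value below is hypothetical, not from a real response):

```shell
# HBase REST responses carry cell values base64-encoded;
# decode a sample value with the standard base64 tool (GNU coreutils).
encoded='aGVsbG8gaGJhc2U='   # hypothetical value from a REST response
printf '%s' "$encoded" | base64 -d
# prints: hello hbase
```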