Member since: 06-09-2016
Posts: 529
Kudos Received: 129
Solutions: 104
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1738 | 09-11-2019 10:19 AM |
| | 9344 | 11-26-2018 07:04 PM |
| | 2492 | 11-14-2018 12:10 PM |
| | 5344 | 11-14-2018 12:09 PM |
| | 3159 | 11-12-2018 01:19 PM |
06-04-2018
05:28 PM
@Vinay K If the one-way trust is correctly configured, user principals will be able to authenticate as user@AD.REALM. My understanding is that you are now asking how those UPNs (user principal names) will then be authorized by Hadoop services. For this you need to update the auth_to_local rules (in core-site.xml) and add a rule that maps user@AD.REALM to user (a sketch is shown below). Then you can set authorization rules for this user (the short name, no longer the UPN, since it has been mapped by auth_to_local) using Ranger or regular HDFS POSIX permissions/service ACLs. More here: https://community.hortonworks.com/articles/14463/auth-to-local-rules-syntax.html Note: Please comment on this post rather than creating a new answer. Thanks!
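A minimal sketch of such a mapping in core-site.xml, assuming AD.REALM is the trusted AD realm name (adjust to your environment):
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    <!-- assumption: AD.REALM is your AD realm; this rule strips the suffix so user@AD.REALM becomes user -->
    RULE:[1:$1@$0](.*@AD\.REALM)s/@AD\.REALM//
    DEFAULT
  </value>
</property>
The DEFAULT rule keeps handling principals from the cluster's own realm as before.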
06-04-2018
03:06 PM
1 Kudo
@Victor This one is related to the other post you recently created. I suggest you go over this link: https://community.hortonworks.com/articles/82964/getting-started-with-apache-ambari-workflow-design.html HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
06-04-2018
03:01 PM
@Kai Chaza Glad to know it worked for you as well. Please take a moment to login and click the "accept" link on the answer.
06-04-2018
02:58 PM
@Vinay K
1. How can I test whether the one-way trust was created successfully? > Try to access any kerberized service on your cluster with a ticket from your AD (see the verification sketch after this answer). For example:
kinit user@AD.REALM
hdfs dfs -ls /
# The cluster is using MIT Kerberos in MIT.REALM, which is different than AD.REALM; the above will only work if the one-way trust is correctly configured.
2. Users will persist on the AD server and services will persist on the Hadoop cluster. Do I have to create user principals in the Kerberos database? > No need to create user principals in the Kerberos database, since you already have them in AD.
3. If yes, do I have to add principals to Kerberos manually whenever a new user is created on the AD server? > No, this would lead to duplicate users and would be very hard to maintain. Keep users in AD only.
HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
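A quick way to verify the cross-realm hop, assuming MIT.REALM is the cluster realm and AD.REALM is the trusted AD realm (the klist output described is illustrative):
$ kinit user@AD.REALM     # TGT issued by the AD KDC
$ hdfs dfs -ls /          # forces a service-ticket request against the cluster realm
$ klist                   # on success the cache shows, besides the AD TGT, a cross-realm
                          # ticket such as krbtgt/MIT.REALM@AD.REALM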
06-04-2018
02:37 PM
@RC Based on the documentation it is a regular expression: "Regular expression that defines which attributes to send as HTTP headers in the request." Perhaps you can name your attributes httpheader1, httpheader2, ..., httpheaderN, and then use a regex that matches all of them, for example httpheader[0-9]+ (a sanity check is sketched below). HTH
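A quick sanity check of the pattern itself (the attribute names here are assumptions for illustration):
$ printf 'httpheader1\nhttpheader2\nhttpheader12\nsomething_else\n' | grep -E '^httpheader[0-9]+$'
httpheader1
httpheader2
httpheader12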
06-04-2018
02:31 PM
@Pirlouis Pirlouis You should use a JDBC/ODBC client (instead of direct curl commands to Knox-Hive). Try this:
$ beeline
beeline> !connect jdbc:hive2://my_knox_hostname:9443/;ssl=true;sslTrustStore=/var/lib/knox/data-*/security/keystores/gateway.jks;trustStorePassword=knox;transportMode=http;httpPath=gateway/default/hive
The above will prompt for a user and password (type the same myuser:mypasswd); a sample session is sketched below. For more information read here: https://hortonworks.com/blog/secure-jdbc-odbc-clients-access-hiveserver2/ HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
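A sketch of what the session looks like, assuming the same myuser/mypasswd credentials (the output shown is illustrative):
$ beeline
beeline> !connect jdbc:hive2://my_knox_hostname:9443/;...   (full connection URL as above)
Enter username for jdbc:hive2://my_knox_hostname:9443/: myuser
Enter password for jdbc:hive2://my_knox_hostname:9443/: ********
Connected to: Apache Hive
0: jdbc:hive2://my_knox_hostname:9443/> show databases;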
06-02-2018
02:43 AM
@Kai Chaza I was checking the code differences between HDP 2.6.5 and HDP 2.6.4 and found the following two bug fixes, which are present in HDP 2.6.5 but not in HDP 2.6.4: [SPARK-21637][SPARK-21451]. These explain why --conf "spark.hadoop.hive.cli.print.header=true" is not working on HDP 2.6.4 while it works fine on HDP 2.6.5.
06-02-2018
02:13 AM
1 Kudo
@Kai Chaza I was testing with HDP 2.6.5, sorry about that. For HDP 2.6.4 I was able to make it work by adding the following property:
<property>
<name>hive.cli.print.header</name>
<value>true</value>
</property>
to /etc/spark2/conf/hive-site.xml (you have to edit the file manually, not via Ambari).
Here are the results in HDP 2.6.4:
SPARK_MAJOR_VERSION=2 spark-sql
SPARK_MAJOR_VERSION is set to 2, using Spark2
18/06/02 02:06:33 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
18/06/02 02:06:33 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
spark-sql> select * from test;
id name
1 Felix
2 Jhon
Time taken: 3.321 seconds, Fetched 2 row(s)
Please test it and let me know if this works for you!
06-01-2018
10:24 PM
@Kai Chaza Try to run spark-sql like this:
$ SPARK_MAJOR_VERSION=2 spark-sql --conf "spark.hadoop.hive.cli.print.header=true"
spark-sql> select * from test.test3_falbani;
id name
1 Felix
2 Jhon
Time taken: 3.015 seconds
You can also add the above config, spark.hadoop.hive.cli.print.header=true, to Custom spark-defaults using Ambari (see the snippet below). I still haven't found a setting for the table borders, but perhaps the above helps you in finding the solution. HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
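For reference, the entry as it would appear in the spark-defaults.conf format used by Custom spark-defaults (keys and values are whitespace-separated):
# makes spark-sql print column headers; equivalent to the --conf flag above
spark.hadoop.hive.cli.print.header true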
06-01-2018
08:49 PM
@Maxim Dashenko I'm glad it got resolved. Please take a moment to login and click the "accept" link on the answer!