Member since: 06-09-2016
Posts: 529
Kudos Received: 129
Solutions: 104
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1788 | 09-11-2019 10:19 AM |
| | 9427 | 11-26-2018 07:04 PM |
| | 2560 | 11-14-2018 12:10 PM |
| | 5563 | 11-14-2018 12:09 PM |
| | 3244 | 11-12-2018 01:19 PM |
06-15-2018 01:17 PM · 1 Kudo
@Ivan Diaz AFAIK, if you don't select an authentication method, users can type any username and password. You need to select an authentication method other than NONE. If you don't have LDAP and are not willing to set up Kerberos, perhaps you can configure PAM: https://community.hortonworks.com/articles/591/using-hive-with-pam-authentication.html If you think this answer has helped address your question, please remember to take a moment to log in and click the "accept" link on the answer.
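To sketch the PAM option mentioned above: the change boils down to two properties in hive-site.xml. The property names are the standard HiveServer2 ones; the list of PAM services is an assumption you should adjust for your OS.

```xml
<!-- Illustrative hive-site.xml fragment: switch HiveServer2 auth from NONE to PAM -->
<property>
  <name>hive.server2.authentication</name>
  <value>PAM</value>
</property>
<property>
  <!-- Comma-separated PAM services to authenticate against; "login,sshd" is an assumption -->
  <name>hive.server2.authentication.pam.services</name>
  <value>login,sshd</value>
</property>
```

Restart HiveServer2 after the change so the new authentication mode takes effect.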
06-15-2018 01:06 PM
@Pirlouis Pirlouis As I mentioned, you should check the HiveServer2 logs to find more clues as to why you are getting a 401. Since you mentioned the hive-site.xml is not managed by Ambari, I strongly suspect the proxy-user settings for WebHCat could be missing, which may be leading to this problem. To enable debug logging on HiveServer2, just set the root logger to DEBUG in hive-log4j.properties.
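As a minimal sketch of that change (the appender name DRFA is the usual default in hive-log4j.properties, but verify against your own file):

```properties
# Raise HiveServer2 logging to DEBUG to trace the 401 responses
hive.root.logger=DEBUG,DRFA
```

Remember to revert to the previous level once you have the clues you need, since DEBUG is verbose.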
06-15-2018 12:54 PM · 1 Kudo
@Tom C

How can we make Ranger permissions take precedence over the HDFS ones?

When HDFS authorization is configured with the Ranger plugin, Ranger first tries to find an HDFS policy matching the access request. If a policy matching the resource and user or group is found, it is enforced: the user is authorized or denied based on the policy, without checking HDFS POSIX-level permissions. Only if no policy matching the resource and user or group is found does it fall back to HDFS POSIX-level permissions.

Why is it not working in the above example?

In your example I see you created a policy only for the resource slash /; if you intended all paths, you should use slash plus a wildcard: /*. Furthermore, the error you show is "Permission denied: user=test". Since that user is not listed under "Select User", you should make sure the test user belongs to the hadoop, hdfs, or HDFS_Admin groups.

HTH *** If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
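To quickly verify which groups HDFS resolves for that user, a sketch (run on a cluster node where the group mapping is configured):

```shell
# Show the groups HDFS resolves for user "test"; both Ranger group-based
# policies and the POSIX fallback rely on this resolution
hdfs groups test
```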
06-14-2018 05:43 PM
@Ivan Diaz Based on the above details I guess hive.server2.authentication=NONE and hive_security_authorization=RANGER. You can restrict user access by setting hive.server2.authentication to KERBEROS, LDAP, or another supported authentication method other than NONE.

Furthermore, you can also add Ranger policies to allow access only to non-anonymous users while keeping hive_security_authorization=RANGER (Hive Ranger plugin on). For this you need to add policies on the Hive repo for the group/user and set the correct permissions. Once this is configured correctly, even if users are able to authenticate with no username/password, they won't be able to perform any actions on HiveServer2.

HTH *** If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
06-14-2018 01:09 PM
@Snehal Shelgaonkar I'm glad to hear this worked for you. Please take a moment to log in and click the "accept" link on the answer.
06-13-2018 04:42 PM
Sharing the YARN application logs would help us review this issue. At the very least, share the full error stack and let us know which log you got it from. Lastly, if you run in yarn-client mode, do you still see this error?
06-13-2018 01:51 PM
@priyal patel Increasing driver memory seems to have helped, then. If the OOM issue is no longer happening, I recommend you open a separate thread for the performance issue. In any case, to see why it is taking long, you can check the Spark UI to see which job/task is taking time and on which node. Then you can also review the logs for more information: yarn logs -applicationId <appId> HTH *** If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
06-13-2018 12:15 PM
@Hemant Kumar
Here is an article on how to start a process group via the REST API:
https://community.hortonworks.com/articles/110096/start-process-group-using-nifi-rest-api.html
For your first question, if you are looking to update the processor configuration, please review:
https://nifi.apache.org/docs/nifi-docs/rest-api/index.html (PUT /processors/{id})
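As a rough sketch of both calls (host, port, IDs, and the revision number are placeholders; check the REST docs linked above for the exact request bodies):

```shell
# Start all components in a process group (PUT /flow/process-groups/{id})
curl -X PUT -H 'Content-Type: application/json' \
  -d '{"id":"<pg-id>","state":"RUNNING"}' \
  'http://<nifi-host>:8080/nifi-api/flow/process-groups/<pg-id>'

# Update a processor's configuration (PUT /processors/{id}); the body must
# carry the processor's current revision, obtained via GET /processors/{id}
curl -X PUT -H 'Content-Type: application/json' \
  -d '{"revision":{"version":<n>},"component":{"id":"<proc-id>","config":{}}}' \
  'http://<nifi-host>:8080/nifi-api/processors/<proc-id>'
```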
HTH *** If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
06-13-2018 12:09 PM · 1 Kudo
@Anjali Shevadkar @Anurag Mishra What about searching for a policy by username? https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_security/content/ranger_rest_api_policy_search.html service/public/api/policy?userName=anurag HTH *** If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
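As an illustrative call against that endpoint (host and credentials are placeholders for your Ranger admin setup):

```shell
# Search Ranger policies that reference a given user via the public v1 API
curl -u admin:<password> \
  'http://<ranger-host>:6080/service/public/api/policy?userName=anurag'
```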
06-13-2018 12:02 PM
@JAy PaTel Try to run in yarn-cluster mode and see if your application runs fine then:

./bin/spark-submit --class com.apache.<ClassName> --master yarn-cluster /root/<my_jar.jar> /<input_path_of_HDFS> /<output_path_of_HDFS>