Member since: 04-03-2019
Posts: 962
Kudos Received: 1743
Solutions: 146

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 17737 | 03-08-2019 06:33 PM |
| | 7166 | 02-15-2019 08:47 PM |
09-06-2016
05:17 PM
Also, make sure that curl is installed on the target machine, and that the target machine can see the Oozie server and run a curl command against it. In my case I hit a timeout issue; installing curl alone didn't fix it, so I added the Oozie server's FQDN to /etc/hosts and then it worked perfectly.
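As a concrete check (the hostname, IP, and port are placeholders for your environment; 11000 is Oozie's default port), you can hit the Oozie admin status endpoint from the target machine:

```shell
# If DNS does not resolve the Oozie server, add its FQDN to /etc/hosts
# (IP address and hostname here are hypothetical)
echo "10.0.0.5  oozie-server.example.com" | sudo tee -a /etc/hosts

# Verify the target machine can reach the Oozie server with curl;
# a healthy server responds with {"systemMode":"NORMAL"}
curl -s http://oozie-server.example.com:11000/oozie/v1/admin/status
```

If this times out rather than returning a response, the problem is network reachability or name resolution, not the Oozie action itself.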
10-20-2016
02:28 PM
I faced the following error where a shell script runs for over 24 hours and then fails to launch Hive scripts, with the error below. Workaround: increase the following property values in hive-site.xml and restart the Hive Metastore, sizing them to how long the Oozie shell script needs to keep running:

hive.cluster.delegation.token.renew-interval (default: 86400000 ms, i.e. 24 hours)
hive.cluster.delegation.token.max-lifetime (default: 604800000 ms, i.e. 7 days)

YARN application log:

Stdoutput 16/10/16 17:12:00 [main]: WARN hive.metastore: Failed to connect to the MetaStore Server...
Stdoutput org.apache.thrift.transport.TTransportException: Peer indicated failure: DIGEST-MD5: IO error acquiring password

Hive Metastore error:

ERROR [pool-5-thread-198]: transport.TSaslTransport (TSaslTransport.java:open(315)) - SASL negotiation failure
javax.security.sasl.SaslException: DIGEST-MD5: IO error acquiring password [Caused by org.apache.hadoop.security.token.SecretManager$InvalidToken: token expired or does not exist: owner=user, renewer=oozie, realUser=oozie/oozie.host.name@EXAMPLE.COM, issueDate=1476560270232, maxDate=1477165070232, sequenceNumber=51, masterKeyId=714]
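The workaround above can be applied as a hive-site.xml fragment. This is a sketch; the values shown raise both limits to 14 days, and you should pick values that match how long your shell action actually runs:

```xml
<!-- Extend Hive delegation token lifetimes so long-running Oozie shell
     actions can still authenticate to the Metastore after 24 hours. -->
<property>
  <name>hive.cluster.delegation.token.renew-interval</name>
  <!-- default 86400000 ms (24 hours); raised to 14 days here -->
  <value>1209600000</value>
</property>
<property>
  <name>hive.cluster.delegation.token.max-lifetime</name>
  <!-- default 604800000 ms (7 days); raised to 14 days here -->
  <value>1209600000</value>
</property>
```

Remember to restart the Hive Metastore after the change so the new token lifetimes take effect.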
08-12-2016
05:17 AM
@Robert Levas - DEFAULT in the middle worked when I tried this setup. I checked the given article and I agree that modifying dfs.namenode.kerberos.principal.pattern was somehow missed while writing this article. I will add that missing step now. Thank you! 🙂
12-29-2017
02:35 AM
Tip! 🙂 Please make sure to add the line below to hbase-indexer-env.sh in order to avoid the org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /hbase-secure/... error:

HBASE_INDEXER_OPTS="$HBASE_INDEXER_OPTS -Djava.security.auth.login.config=<path-of-indexer-jaas-file>"
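For reference, the indexer JAAS file pointed to by java.security.auth.login.config typically looks like the sketch below; the keytab path and principal are placeholders for your environment:

```
// Hypothetical JAAS configuration for the HBase Indexer's ZooKeeper client.
// Replace the keytab path and principal with the ones from your cluster.
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/hbase-indexer.keytab"
  principal="hbase-indexer/host.example.com@EXAMPLE.COM";
};
```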
12-14-2017
04:03 AM
https://community.hortonworks.com/articles/52877/oozie-shell-action-run-hivetez-query-in-shell-scri.html
01-25-2018
06:40 AM
Hi @Kuldeep Kulkarni, does this tutorial still work with Ambari 2.6?
08-01-2018
03:57 AM
Hi @Kalyan Das, I think it's better to create a new community thread for the issue you are facing. It will be easily viewable by other community users who are willing to help. Comments here won't allow posting screenshots and other attachments.
07-21-2016
11:40 PM
2 Kudos
This tutorial has been successfully tried on HDP-2.4.2.0 and Ambari 2.2.2.0. I have my HDP cluster Kerberized with NameNode HA. Please follow the steps below to configure the File View on a Kerberized HDP cluster.

Step 1 - Configure your Ambari Server for Kerberos using steps 1 to 5 of the article below:
https://community.hortonworks.com/articles/40635/configure-tez-view-for-kerberized-hdp-cluster.html

Step 2 - Add the properties below to core-site.xml via the Ambari UI and restart the required services.

Note - If you are running Ambari Server as the root user, add:

hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=*

If you are running Ambari Server as a non-root user, add the following to core-site.xml, replacing <ambari-server-user> with the user running Ambari Server:

hadoop.proxyuser.<ambari-server-user>.groups=*
hadoop.proxyuser.<ambari-server-user>.hosts=*

I'm assuming that your Ambari server principal is ambari-server@REALM.COM; if not, replace 'ambari-server' with your principal's user part:

hadoop.proxyuser.ambari-server.groups=*
hadoop.proxyuser.ambari-server.hosts=*

Step 3 - Create a user directory on HDFS for the user accessing the File View. For example, in my case I'm using the admin user:

sudo -u hdfs hadoop fs -mkdir /user/admin
sudo -u hdfs hadoop fs -chown admin:hdfs /user/admin
sudo -u hdfs hadoop fs -chmod 755 /user/admin

Step 4 - Go to the Admin tab --> click Manage Ambari --> Views --> edit the File view (create a new one if it doesn't exist already) and configure the settings as given below.

Note - You may need to modify values to match your environment settings!

After the above steps, you should be able to access your File View without any issues. If you receive any errors, check /var/log/ambari-server/ambari-server.log for more details and troubleshooting.

Please comment if you have any feedback/questions/suggestions. Happy Hadooping!!
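Once the proxyuser settings and the user directory are in place, you can sanity-check them from the Ambari host over WebHDFS. This is a hedged sketch: the NameNode host and port, the keytab path, and the principal are placeholders for your environment, and it assumes WebHDFS is enabled:

```shell
# Obtain a Kerberos ticket for the Ambari server principal (keytab path is hypothetical)
kinit -kt /etc/security/keytabs/ambari.server.keytab ambari-server@REALM.COM

# List the new user directory via WebHDFS, impersonating 'admin' through the
# hadoop.proxyuser.* rules. If those rules are wrong, the NameNode returns an
# AuthorizationException instead of the directory listing.
curl --negotiate -u : \
  "http://namenode.example.com:50070/webhdfs/v1/user/admin?op=LISTSTATUS&doas=admin"
```

If this call succeeds, the same impersonation path the File View uses is known to work, which narrows any remaining problem to the view configuration itself.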
06-24-2016
06:36 AM
8 Kudos
This tutorial has been successfully tried on HDP-2.4.0.0 and Ambari 2.2.1.0. I have my HDP cluster Kerberized, and Ambari has been configured for SSL. Note - the steps are the same for Ambari with or without SSL. Please follow the steps below to configure the Pig View on a Kerberized HDP cluster.

Step 1 - Configure your Ambari Server for Kerberos using steps 1 to 5 of the article below:
https://community.hortonworks.com/articles/40635/configure-tez-view-for-kerberized-hdp-cluster.html

Step 2 - Add the properties below to core-site.xml via the Ambari UI and restart the required services.

Note - If you are running Ambari Server as the root user, add:

hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=*

If you are running Ambari Server as a non-root user, add the following to core-site.xml, replacing <ambari-server-user> with the user running Ambari Server:

hadoop.proxyuser.<ambari-server-user>.groups=*
hadoop.proxyuser.<ambari-server-user>.hosts=*

I'm assuming that your Ambari server principal is ambari-server@REALM.COM; if not, replace 'ambari-server' with your principal's user part:

hadoop.proxyuser.ambari-server.groups=*
hadoop.proxyuser.ambari-server.hosts=*

Step 3 - Create a user directory on HDFS for the user accessing the Pig View. For example, in my case I'm using the admin user:

sudo -u hdfs hadoop fs -mkdir /user/admin
sudo -u hdfs hadoop fs -chown admin:hdfs /user/admin
sudo -u hdfs hadoop fs -chmod 755 /user/admin

Step 4 - Go to the Admin tab --> click Manage Ambari --> Views --> edit the Pig view (create a new one if it doesn't exist already) and configure the settings as given below.

Note - You may need to modify values to match your environment settings!

After the above steps, you should be able to access your Pig View without any issues. If you receive any errors, check /var/log/ambari-server/ambari-server.log for more details and troubleshooting.
08-03-2016
06:55 PM
@Kuldeep Kulkarni I have all these properties in place. The Tez View is working fine; however, it's not showing any jobs after we implemented Kerberos.