Member since
07-17-2017
43
Posts
6
Kudos Received
8
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3158 | 03-24-2019 05:54 PM |
| | 4569 | 03-16-2019 04:51 PM |
| | 4569 | 03-16-2019 04:15 AM |
03-24-2019
05:54 PM
Turns out DAS Lite was trying to dump and failing because it had been shut down for too long.
03-24-2019
05:39 PM
I've got a new HDP 3.1.0 installation and I've started seeing failures in hiveserver2.log about replication. Something is trying to run the following command, and it isn't me. This is on a completely isolated system that isn't exposed outside of my VPN, so I'm not sure what's going on.

```
repl dump `*` from 71931 with ('hive.repl.dump.metadata.only'='true', 'hive.repl.dump.include.acid.tables'='true')
```

The failure message is this, which I'm assuming is normal because the NOTIFICATION_LOG doesn't keep records forever:

```
2019-03-24T12:31:17,394 ERROR [HiveServer2-Background-Pool: Thread-209]: metastore.HiveMetaStoreClient (:()) - Requested events are found missing in NOTIFICATION_LOG table. Expected: 71932, Actual: 121582. Probably, cleaner would've cleaned it up. Try setting higher value for hive.metastore.event.db.listener.timetolive. Also, bootstrap the system again to get back the consistent replicated state.
```
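For reference, the TTL the error message points at can be raised in hive-site.xml (or via Ambari's Hive configs). A minimal sketch; the one-week value below is purely illustrative, not a recommendation:

```xml
<!-- How long the metastore keeps NOTIFICATION_LOG events before the
     cleaner purges them; repl dumps starting from an event older than
     this fail with the error above. Value is an illustration only. -->
<property>
  <name>hive.metastore.event.db.listener.timetolive</name>
  <value>604800s</value>
</property>
```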
Labels:
- Apache Hive
03-16-2019
04:51 PM
And finally, typing out the answer for the fourth time, since I keep getting logged out: Ambari is setting rm_security_opts in yarn-env.sh to include yarn_jaas.conf. This is incorrect and breaks the yarn app commands. Commenting out that section and restarting YARN makes everything work correctly.
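A sketch of the workaround as an edit to yarn-env.sh. The exact contents of the rm_security_opts block vary by Ambari version, so treat the lines below as illustrative, not verbatim:

```shell
# In yarn-env.sh as managed by Ambari (Advanced yarn-env in the UI),
# comment out the block that injects yarn_jaas.conf into the
# ResourceManager security opts, e.g. something along these lines:
#
#   rm_security_opts="-Djava.security.auth.login.config=/etc/hadoop/conf/yarn_jaas.conf"
#   export YARN_RESOURCEMANAGER_OPTS="$YARN_RESOURCEMANAGER_OPTS $rm_security_opts"
#
# Then restart YARN from Ambari so the change reaches every node.
```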
03-16-2019
04:14 PM
I've figured out part of the issue. For some reason, all of the yarn app -status style commands use yarn_jaas.conf by default, which directs them to use the rm/_HOST@DOMAIN.COM keytab. If I set them to use zookeeper_client_jaas.conf, which is just a generic JAAS config pointing at your client's Kerberos ticket cache, everything works fine. This seems like a bug, as the client is never going to be able to use yarn_jaas.conf.

```
export HADOOP_OPTS='-Djava.security.auth.login.config=/etc/zookeeper/conf/zookeeper_client_jaas.conf'
```
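Putting it together, a minimal session that worked for me looks like this (the JAAS path is as laid out on my cluster; adjust for yours):

```shell
# Override the client JAAS config for this shell only;
# zookeeper_client_jaas.conf just points at the local ticket cache.
export HADOOP_OPTS='-Djava.security.auth.login.config=/etc/zookeeper/conf/zookeeper_client_jaas.conf'

# Retry the client call with the generic JAAS in effect.
yarn app -status ats-hbase
```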
03-16-2019
04:15 AM
I finally managed to delete the app via a curl command, and Ambari recreated it after a restart. I'm still not able to use any of the yarn app commands while logged in with the yarn-ats keytab /etc/security/keytabs/yarn-ats.hbase-client.headless.keytab. Ambari still complains that ATS HBase isn't up, but the YARN logs for the hbase app look like it started.
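For anyone else stuck here, the delete went through the YARN Services REST API. A hedged sketch (the hostname is from my cluster and a valid Kerberos ticket is assumed):

```shell
# Remove the ats-hbase service definition via the YARN Services API
# (DELETE /app/v1/services/{service-name}).
curl -k --negotiate -u : -X DELETE \
  'http://hdp31-mgt1.dev.example.org:8088/app/v1/services/ats-hbase'
```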
03-16-2019
03:20 AM
Trying the equivalent with curl does this:

```
$ curl -k --negotiate -u: -H "Content-Type: application/json" -X PUT http://hdp31-mgt1.dev.example.org:8088/app/v1/services/ats-hbase -d '{ "state": "STARTED"}'
{"diagnostics":"Kerberos principal or keytab is missing."}
```
03-16-2019
02:39 AM
After enabling Kerberos, the YARN ATS HBase service quits working. Following the directions to destroy the service doesn't work due to some sort of authentication issue. As you can see in my example, I clearly have a Kerberos ticket for the yarn-ats user. I've also checked the Kerberos mapping to ensure this principal is correct. I don't know what else to check.

```
RULE:[1:$1@$0](yarn-ats-hdp31_cluster@DEV.EXAMPLE.ORG)s/.*/yarn-ats/
```

```
[yarn-ats@hdp31-edge ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_1001
Default principal: yarn-ats-hdp31_cluster@DEV.EXAMPLE.ORG

Valid starting       Expires              Service principal
03/15/2019 21:49:32  03/16/2019 21:49:32  krbtgt/DEV.EXAMPLE.ORG@DEV.EXAMPLE.ORG
        renew until 03/22/2019 21:49:32

[yarn-ats@hdp31-edge ~]$ yarn app -start ats-hbase
19/03/15 21:49:41 INFO client.RMProxy: Connecting to ResourceManager at hdp31-mgt1.dev.example.org/10.0.3.52:8050
19/03/15 21:49:41 INFO client.AHSProxy: Connecting to Application History server at hdp31-mgt1.dev.example.org/10.0.3.52:10200
19/03/15 21:49:41 INFO client.RMProxy: Connecting to ResourceManager at hdp31-mgt1.dev.example.org/10.0.3.52:8050
19/03/15 21:49:41 INFO client.AHSProxy: Connecting to Application History server at hdp31-mgt1.dev.example.org/10.0.3.52:10200
19/03/15 21:49:42 ERROR client.ApiServiceClient: Authentication required

[yarn-ats@hdp31-edge ~]$ yarn app -stop ats-hbase
19/03/15 21:49:50 INFO client.RMProxy: Connecting to ResourceManager at hdp31-mgt1.dev.example.org/10.0.3.52:8050
19/03/15 21:49:50 INFO client.AHSProxy: Connecting to Application History server at hdp31-mgt1.dev.example.org/10.0.3.52:10200
19/03/15 21:49:50 INFO client.RMProxy: Connecting to ResourceManager at hdp31-mgt1.dev.example.org/10.0.3.52:8050
19/03/15 21:49:50 INFO client.AHSProxy: Connecting to Application History server at hdp31-mgt1.dev.example.org/10.0.3.52:10200
19/03/15 21:49:51 ERROR client.ApiServiceClient: Authentication required

[yarn-ats@hdp31-edge ~]$ yarn app -destroy ats-hbase
19/03/15 21:49:58 INFO client.RMProxy: Connecting to ResourceManager at hdp31-mgt1.dev.example.org/10.0.3.52:8050
19/03/15 21:49:58 INFO client.AHSProxy: Connecting to Application History server at hdp31-mgt1.dev.example.org/10.0.3.52:10200
19/03/15 21:49:58 INFO client.RMProxy: Connecting to ResourceManager at hdp31-mgt1.dev.example.org/10.0.3.52:8050
19/03/15 21:49:58 INFO client.AHSProxy: Connecting to Application History server at hdp31-mgt1.dev.example.org/10.0.3.52:10200
19/03/15 21:49:58 ERROR client.ApiServiceClient: Authentication required
```
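For completeness, the ticket shown above came from the headless keytab. Assuming the standard HDP keytab layout on my cluster, obtaining and checking it looks like:

```shell
# Acquire a ticket as the yarn-ats headless principal from its keytab,
# then verify the ticket cache.
kinit -kt /etc/security/keytabs/yarn-ats.hbase-client.headless.keytab \
  yarn-ats-hdp31_cluster@DEV.EXAMPLE.ORG
klist
```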
Labels:
- Apache YARN
09-06-2018
04:58 PM
This isn't limited to cloud installations: any server where /tmp is mounted noexec will have this issue, and mounting /tmp noexec is generally considered a security best practice. HDF 3.1.0 did not have this issue.
08-10-2018
05:29 PM
I'm pretty sure this is no longer correct. Oozie supports Kerberos credential delegation for Hive2 actions, though I have yet to get it all working on the latest release of HDP 2.6.
08-14-2017
06:29 PM
This won't actually work, as the Hive and Ambari database schemas don't support Group Replication. Several tables are missing primary keys, which will lead to problems in replication. See Group Replication Requirements.
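You can see the problem for yourself by listing the tables with no primary key, which Group Replication refuses to replicate. A sketch, assuming the metastore lives in a MySQL schema named hive (the schema name and credentials are assumptions):

```shell
# Hypothetical check: list metastore tables that lack a PRIMARY KEY
# constraint; Group Replication requires one on every replicated table.
mysql -u root -p -e "
  SELECT t.table_name
  FROM information_schema.tables t
  LEFT JOIN information_schema.table_constraints c
    ON  c.table_schema    = t.table_schema
    AND c.table_name      = t.table_name
    AND c.constraint_type = 'PRIMARY KEY'
  WHERE t.table_schema = 'hive'
    AND t.table_type   = 'BASE TABLE'
    AND c.constraint_name IS NULL;"
```

Any rows returned are tables that would break under Group Replication.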