Member since: 07-17-2017
Posts: 43
Kudos Received: 6
Solutions: 8

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2368 | 03-24-2019 05:54 PM
 | 3288 | 03-16-2019 04:51 PM
 | 3288 | 03-16-2019 04:15 AM
 | 1348 | 08-04-2018 12:44 PM
 | 2177 | 07-23-2018 01:35 PM
03-24-2019 05:54 PM
Turns out DAS Lite was trying to run the dump and failing because it had been shut down for too long.
03-24-2019 05:39 PM
I've got a new HDP 3.1.0 installation and I've started seeing replication-related failures in hiveserver2.log. Something is trying to run the following command, and it isn't me. This is on a completely isolated system that isn't exposed outside of my VPN, so I'm not sure what's going on.

repl dump `*` from 71931 with ('hive.repl.dump.metadata.only'='true', 'hive.repl.dump.include.acid.tables'='true')

The failure message is below, which I assume is normal because the NOTIFICATION_LOG doesn't keep records forever.

2019-03-24T12:31:17,394 ERROR [HiveServer2-Background-Pool: Thread-209]: metastore.HiveMetaStoreClient (:()) - Requested events are found missing in NOTIFICATION_LOG table. Expected: 71932, Actual: 121582. Probably, cleaner would've cleaned it up. Try setting higher value for hive.metastore.event.db.listener.timetolive. Also, bootstrap the system again to get back the consistent replicated state.
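In case it helps anyone debugging the same thing, a quick way to see how often the dumps are firing (and whether they line up with DAS activity) is to grep the HiveServer2 log. The log path below is an assumption for a default HDP 3.x install, so adjust it if your Hive logs live elsewhere:

# Log path assumes a stock HDP 3.x layout; adjust if your Hive logs are elsewhere
grep -ci 'repl dump' /var/log/hive/hiveserver2.log
grep -i 'repl dump' /var/log/hive/hiveserver2.log | tail -n 5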
Labels:
- Apache Hive
03-17-2019 04:18 AM
I submitted KNOX-1828 for this issue and have created a pull request for a patch that appears to work.
03-16-2019 09:19 PM
There appears to be a bug with the new way Knox is creating topologies. As far as I can tell, none of the gateway.websocket parameters are actually being applied, because nowhere in Knox is the value ever set to 65536. The defaults for the important settings are Integer.MAX_VALUE, which is a lot higher. The only place with a default value of 65536 is in the Jetty source code, so somehow the parameters aren't being applied.
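If anyone wants to check their own install, a quick sanity check is to see which gateway.websocket settings are actually present in the active gateway configuration. The directories below are assumptions for a stock HDP layout, so point at wherever your Knox config actually lives:

# Config locations are assumptions for a default HDP install of Knox; adjust to your own conf dir
grep -R "gateway.websocket" /etc/knox/conf/ /usr/hdp/current/knox-server/conf/ 2>/dev/null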
03-16-2019 04:51 PM
And finally typing out the answer for the fourth time, since I keep getting logged out. Ambari is setting rm_security_opts in yarn-env.sh to include yarn_jaas.conf. This is incorrect and breaks the yarn app commands. Commenting out that section and restarting YARN makes everything work correctly.
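For anyone hunting for the exact spot: the block Ambari drops into yarn-env.sh looks roughly like the sketch below. rm_security_opts is the piece to comment out; the JAAS path and the exact export line are assumptions from memory and differ between Ambari versions, so match this against your own template rather than copying it.

# Approximate shape of the Ambari-added yarn-env.sh block (JAAS path and export line are assumptions; check your template)
# rm_security_opts="-Djava.security.auth.login.config=/etc/hadoop/conf/yarn_jaas.conf ..."
# export YARN_OPTS="$YARN_OPTS $rm_security_opts"

Commenting out the part that appends rm_security_opts to the client-side options and restarting YARN from Ambari is what restored the yarn app commands.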
03-16-2019 04:14 PM
I've figured out part of the issue. For some reason all of the yarn app -status type commands are using yarn_jaas.conf by default, which directs them to use the rm/_HOST@DOMAIN.COM keytab. If I set them to use zookeeper_client_jaas.conf, which is just a generic JAAS config pointing at your client's Kerberos ticket cache, everything works fine. This seems like a bug, since the client is never going to be able to use yarn_jaas.conf.

export HADOOP_OPTS='-Djava.security.auth.login.config=/etc/zookeeper/conf/zookeeper_client_jaas.conf'
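Spelling out the workaround end to end, this is roughly the sequence I'd run on an edge node. The keytab and principal names are taken from my other posts in this thread, so substitute your own cluster's values:

# Workaround sketch: keytab and principal are from my cluster, adjust for yours
kinit -kt /etc/security/keytabs/yarn-ats.hbase-client.headless.keytab yarn-ats-hdp31_cluster@DEV.EXAMPLE.ORG
export HADOOP_OPTS='-Djava.security.auth.login.config=/etc/zookeeper/conf/zookeeper_client_jaas.conf'
yarn app -status ats-hbase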
03-16-2019 04:15 AM
Finally managed to delete the app via a curl command, and Ambari recreated it after a restart. Still not able to use any of the yarn app commands while logged in with the yarn-ats keytab /etc/security/keytabs/yarn-ats.hbase-client.headless.keytab. Ambari still complains that ATS HBase isn't up, but the YARN logs for the hbase app look like it started.
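For reference, the destroy call uses the same /app/v1/services endpoint as the PUT in my other post, just with the DELETE verb. This is a sketch rather than the exact command I saved, so double-check the host and port against your own ResourceManager:

# Sketch of the service destroy call; endpoint pattern follows the PUT example, host/port are from my cluster
curl -k --negotiate -u: -X DELETE http://hdp31-mgt1.dev.example.org:8088/app/v1/services/ats-hbase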
03-16-2019 03:20 AM
Trying the equivalent with curl returns this:

curl -k --negotiate -u: -H "Content-Type: application/json" -X PUT http://hdp31-mgt1.dev.example.org:8088/app/v1/services/ats-hbase -d '{ "state": "STARTED"}'

{"diagnostics":"Kerberos principal or keytab is missing."}
03-16-2019 02:39 AM
After enabling Kerberos, the YARN ATS HBase service quits working. Following the directions to destroy the service doesn't work due to some sort of authentication issue. As you can see below, I clearly have a Kerberos ticket for the yarn-ats user. I've also checked the Kerberos mapping to ensure this principal is correct. I don't know what else to check.

RULE:[1:$1@$0](yarn-ats-hdp31_cluster@DEV.EXAMPLE.ORG)s/.*/yarn-ats/

[yarn-ats@hdp31-edge ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_1001
Default principal: yarn-ats-hdp31_cluster@DEV.EXAMPLE.ORG
Valid starting       Expires              Service principal
03/15/2019 21:49:32  03/16/2019 21:49:32  krbtgt/DEV.EXAMPLE.ORG@DEV.EXAMPLE.ORG
        renew until 03/22/2019 21:49:32

[yarn-ats@hdp31-edge ~]$ yarn app -start ats-hbase
19/03/15 21:49:41 INFO client.RMProxy: Connecting to ResourceManager at hdp31-mgt1.dev.example.org/10.0.3.52:8050
19/03/15 21:49:41 INFO client.AHSProxy: Connecting to Application History server at hdp31-mgt1.dev.example.org/10.0.3.52:10200
19/03/15 21:49:41 INFO client.RMProxy: Connecting to ResourceManager at hdp31-mgt1.dev.example.org/10.0.3.52:8050
19/03/15 21:49:41 INFO client.AHSProxy: Connecting to Application History server at hdp31-mgt1.dev.example.org/10.0.3.52:10200
19/03/15 21:49:42 ERROR client.ApiServiceClient: Authentication required

[yarn-ats@hdp31-edge ~]$ yarn app -stop ats-hbase
19/03/15 21:49:50 INFO client.RMProxy: Connecting to ResourceManager at hdp31-mgt1.dev.example.org/10.0.3.52:8050
19/03/15 21:49:50 INFO client.AHSProxy: Connecting to Application History server at hdp31-mgt1.dev.example.org/10.0.3.52:10200
19/03/15 21:49:50 INFO client.RMProxy: Connecting to ResourceManager at hdp31-mgt1.dev.example.org/10.0.3.52:8050
19/03/15 21:49:50 INFO client.AHSProxy: Connecting to Application History server at hdp31-mgt1.dev.example.org/10.0.3.52:10200
19/03/15 21:49:51 ERROR client.ApiServiceClient: Authentication required

[yarn-ats@hdp31-edge ~]$ yarn app -destroy ats-hbase
19/03/15 21:49:58 INFO client.RMProxy: Connecting to ResourceManager at hdp31-mgt1.dev.example.org/10.0.3.52:8050
19/03/15 21:49:58 INFO client.AHSProxy: Connecting to Application History server at hdp31-mgt1.dev.example.org/10.0.3.52:10200
19/03/15 21:49:58 INFO client.RMProxy: Connecting to ResourceManager at hdp31-mgt1.dev.example.org/10.0.3.52:8050
19/03/15 21:49:58 INFO client.AHSProxy: Connecting to Application History server at hdp31-mgt1.dev.example.org/10.0.3.52:10200
19/03/15 21:49:58 ERROR client.ApiServiceClient: Authentication required
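Regarding the Kerberos mapping check I mentioned: a quick way to confirm the auth_to_local rule resolves the principal to the short name you expect is Hadoop's built-in resolver. Run it on a cluster node so it picks up the cluster's core-site.xml rules:

# Prints the short name the RULE above resolves to; expect yarn-ats if the mapping is applied
hadoop org.apache.hadoop.security.HadoopKerberosName yarn-ats-hdp31_cluster@DEV.EXAMPLE.ORG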
Labels:
- Apache YARN
02-06-2019 06:31 PM
At the time of my comment the source wasn't available, and you had to have a support contract to get it. It still doesn't appear there is a way to get the installation without a support contract. However, after the merger, Cloudera's Data Science Workbench looks very interesting.