Member since: 07-17-2017
Posts: 43
Kudos Received: 6
Solutions: 8

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1601 | 03-24-2019 05:54 PM |
| | 1932 | 03-16-2019 04:51 PM |
| | 1932 | 03-16-2019 04:15 AM |
| | 660 | 08-04-2018 12:44 PM |
| | 1169 | 07-23-2018 01:35 PM |
03-29-2019
03:43 PM
So it turns out the patch for this hasn't been merged into HDF 3.4.0 yet, and despite Ambari enabling SSO for SAM, it's broken. See https://github.com/hortonworks/streamline/issues/1330
03-29-2019
02:11 PM
Can someone point me to the documentation for Single Sign-On support in SAM? I can't find it mentioned anywhere, and I can't get it working. I can see in streamline.log where it saw the hadoop-jwt cookie and extracted my username, but every call in the UI just returns this error: {"responseMessage":"Not authorized"}. After adding some debug logging, I can see that everything seems to have authenticated.

INFO [2019-03-29 09:38:39.621] [dw-67] c.h.r.a.s.JWTAuthenticationHandler - authentication request received...
INFO [2019-03-29 09:38:39.621] [dw-67] c.h.r.a.s.JWTAuthenticationHandler - hadoop-jwt cookie has been found and is being processed
DEBUG [2019-03-29 09:38:39.621] [dw-67] c.h.r.a.s.JWTAuthenticationHandler - JWT token is in a SIGNED state
DEBUG [2019-03-29 09:38:39.621] [dw-67] c.h.r.a.s.JWTAuthenticationHandler - JWT token signature is not null
DEBUG [2019-03-29 09:38:39.626] [dw-67] c.h.r.a.s.JWTAuthenticationHandler - JWT token has been successfully verified
DEBUG [2019-03-29 09:38:39.626] [dw-67] c.h.r.a.s.JWTAuthenticationHandler - JWT token expiration date has been successfully validated
INFO [2019-03-29 09:38:39.626] [dw-67] c.h.r.a.s.JWTAuthenticationHandler - USERNAME: sweeks
DEBUG [2019-03-29 09:38:39.626] [dw-67] c.h.r.a.s.JWTAuthenticationHandler - Issuing AuthenticationToken for user.
DEBUG [2019-03-29 09:38:39.626] [dw-67] c.h.r.a.s.AuthenticationFilter - Request [http://hdp31-df1.dev.example.com:7777/api/v1/config/streamline] user [sweeks] authenticated
DEBUG [2019-03-29 09:38:39.630] [dw-67 - GET /api/v1/config/streamline] c.h.s.s.s.a.StreamlineKerberosRequestFilter - Method: GET, AuthType: jwt, RemoteUser: sweeks, UserPrincipal: u=sweeks&p=sweeks&t=jwt&e=1553870349626, Scheme: http
- Tags:
- sam
- StreamLine
03-24-2019
05:54 PM
Turns out DAS Lite was trying to run a dump and failing because it had been shut down for too long.
03-24-2019
05:39 PM
I've got a new HDP 3.1.0 installation, and I've started seeing replication failures in hiveserver2.log. Something is trying to run the following command, and it isn't me. This is on a completely isolated system that isn't exposed outside of my VPN, so I'm not sure what's going on.

repl dump `*` from 71931 with ('hive.repl.dump.metadata.only'='true', 'hive.repl.dump.include.acid.tables'='true')

The failure message is below, which I'm assuming is normal because the NOTIFICATION_LOG table doesn't keep records forever.

2019-03-24T12:31:17,394 ERROR [HiveServer2-Background-Pool: Thread-209]: metastore.HiveMetaStoreClient (:()) - Requested events are found missing in NOTIFICATION_LOG table. Expected: 71932, Actual: 121582. Probably, cleaner would've cleaned it up. Try setting higher value for hive.metastore.event.db.listener.timetolive. Also, bootstrap the system again to get back the consistent replicated state.
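If the errors are just noise from expired events, one option is to raise the notification retention so the cleaner keeps events around longer. A sketch for hive-site.xml; the value shown (seven days) is illustrative:

<property>
  <name>hive.metastore.event.db.listener.timetolive</name>
  <value>604800s</value>
</property>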
- Tags:
- Hive
Labels:
- Apache Hive
03-17-2019
04:18 AM
I submitted KNOX-1828 for this issue and have created a pull request for a patch that appears to work.
03-16-2019
09:19 PM
There appears to be a bug with the new way Knox is creating topologies. As far as I can tell, none of the gateway.websocket parameters are actually getting applied, because nowhere in Knox is the value ever set to 65536. The defaults for the important settings are Integer.MAX_VALUE, which is a lot higher. The only place with a default value of 65536 is in the Jetty source code, so somehow the parameters aren't being applied.
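For reference, these are the gateway-site properties that should be controlling the limit (names per the Knox user guide; the value is illustrative):

gateway.websocket.max.text.message.size=10485760
gateway.websocket.max.binary.message.size=10485760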
03-16-2019
04:51 PM
And finally, typing out the answer for the fourth time since I keep getting logged out: Ambari is setting rm_security_opts in yarn-env.sh to include yarn_jaas.conf. This is incorrect and breaks the yarn app commands. Commenting out that section and restarting YARN makes everything work correctly.
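For anyone hunting for it, the offending part of yarn-env.sh is the line that appends the JAAS setting to YARN_OPTS. A rough sketch from memory; the exact options Ambari renders on your cluster may differ:

# Commented out so client-side yarn app commands stop picking up the RM-only JAAS config:
# YARN_OPTS="-Djava.security.auth.login.config=/etc/hadoop/conf/yarn_jaas.conf $YARN_OPTS"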
03-16-2019
04:14 PM
I've figured out part of the issue. For some reason, all of the yarn app -status type commands are using yarn_jaas.conf by default, which directs them to use the rm/_HOST@DOMAIN.COM keytab. If I set them to use zookeeper_client_jaas.conf, which is just a generic JAAS config pointing at your client's Kerberos cache, everything works fine. This seems like a bug, as the client is never going to be able to use yarn_jaas.conf.

export HADOOP_OPTS='-Djava.security.auth.login.config=/etc/zookeeper/conf/zookeeper_client_jaas.conf'
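For reference, a generic client JAAS entry of that sort looks like the following (a sketch; the zookeeper_client_jaas.conf Ambari generates may differ slightly):

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true;
};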
03-16-2019
04:15 AM
Finally managed to delete the app via a curl command, and Ambari recreated it after a restart. I'm still not able to use any of the yarn app commands while logged in with the yarn-ats keytab /etc/security/keytabs/yarn-ats.hbase-client.headless.keytab. Ambari still complains that ATS HBase isn't up, but the YARN logs for the hbase app look like it started.
03-16-2019
03:20 AM
Trying the equivalent with curl does this:

curl -k --negotiate -u: -H "Content-Type: application/json" -X PUT http://hdp31-mgt1.dev.example.org:8088/app/v1/services/ats-hbase -d '{ "state": "STARTED"}'

{"diagnostics":"Kerberos principal or keytab is missing."}
03-16-2019
02:39 AM
After enabling Kerberos, the YARN ATS HBase service quits working. Following the directions to destroy the service doesn't work due to some sort of authentication issue. As you can see in my example, I clearly have a Kerberos ticket for the yarn-ats user. I've also checked the Kerberos mapping to ensure this principal is correct. I don't know what else to check.

RULE:[1:$1@$0](yarn-ats-hdp31_cluster@DEV.EXAMPLE.ORG)s/.*/yarn-ats/

[yarn-ats@hdp31-edge ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_1001
Default principal: yarn-ats-hdp31_cluster@DEV.EXAMPLE.ORG
Valid starting Expires Service principal
03/15/2019 21:49:32 03/16/2019 21:49:32 krbtgt/DEV.EXAMPLE.ORG@DEV.EXAMPLE.ORG
renew until 03/22/2019 21:49:32

[yarn-ats@hdp31-edge ~]$ yarn app -start ats-hbase
19/03/15 21:49:41 INFO client.RMProxy: Connecting to ResourceManager at hdp31-mgt1.dev.example.org/10.0.3.52:8050
19/03/15 21:49:41 INFO client.AHSProxy: Connecting to Application History server at hdp31-mgt1.dev.example.org/10.0.3.52:10200
19/03/15 21:49:41 INFO client.RMProxy: Connecting to ResourceManager at hdp31-mgt1.dev.example.org/10.0.3.52:8050
19/03/15 21:49:41 INFO client.AHSProxy: Connecting to Application History server at hdp31-mgt1.dev.example.org/10.0.3.52:10200
19/03/15 21:49:42 ERROR client.ApiServiceClient: Authentication required

[yarn-ats@hdp31-edge ~]$ yarn app -stop ats-hbase
19/03/15 21:49:50 INFO client.RMProxy: Connecting to ResourceManager at hdp31-mgt1.dev.example.org/10.0.3.52:8050
19/03/15 21:49:50 INFO client.AHSProxy: Connecting to Application History server at hdp31-mgt1.dev.example.org/10.0.3.52:10200
19/03/15 21:49:50 INFO client.RMProxy: Connecting to ResourceManager at hdp31-mgt1.dev.example.org/10.0.3.52:8050
19/03/15 21:49:50 INFO client.AHSProxy: Connecting to Application History server at hdp31-mgt1.dev.example.org/10.0.3.52:10200
19/03/15 21:49:51 ERROR client.ApiServiceClient: Authentication required

[yarn-ats@hdp31-edge ~]$ yarn app -destroy ats-hbase
19/03/15 21:49:58 INFO client.RMProxy: Connecting to ResourceManager at hdp31-mgt1.dev.example.org/10.0.3.52:8050
19/03/15 21:49:58 INFO client.AHSProxy: Connecting to Application History server at hdp31-mgt1.dev.example.org/10.0.3.52:10200
19/03/15 21:49:58 INFO client.RMProxy: Connecting to ResourceManager at hdp31-mgt1.dev.example.org/10.0.3.52:8050
19/03/15 21:49:58 INFO client.AHSProxy: Connecting to Application History server at hdp31-mgt1.dev.example.org/10.0.3.52:10200
19/03/15 21:49:58 ERROR client.ApiServiceClient: Authentication required
- Tags:
- yarn-ats
Labels:
- Apache YARN
02-06-2019
06:31 PM
At the time of my comment, the source wasn't available and you had to have a support contract to get it. It still doesn't appear there is a way to get the installation without a support contract. However, after the merger, Cloudera's Data Science Workbench looks very interesting.
11-16-2018
11:56 PM
@Robert Levas Ambari 2.6.2. I have a ticket open for this, but no one at Hortonworks has ever seen it before except for this post. Specifically, my issue is the whole "trying to password for demo-121117 got: at" error, not the warning. Somehow the process interaction with kinit gets a bunch of null characters instead of the response it expected.
11-16-2018
09:37 PM
Since this is the #1 and only Google hit for this issue, we need to get whatever the answer is posted here. There is obviously some sort of undefined behavior in how Ambari is reading responses from IPA, as the responses do look correct.
11-13-2018
09:30 PM
I've got the exact same issue: after an IPA patch this weekend I can no longer generate keytabs. It also took down some other stuff, yet the responses from kinit look correct.
11-07-2018
07:11 PM
Data Analytics Studio (DAS), however, is not open source or free and requires a Hortonworks subscription.
09-06-2018
04:58 PM
This isn't limited to cloud installations; any server where /tmp is mounted noexec will have this issue, and mounting /tmp noexec is generally considered a security best practice. HDF 3.1.0 did not have this issue.
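For context, a hardened /tmp mount of that sort typically looks like this in /etc/fstab (illustrative; the device and option list vary by site):

tmpfs  /tmp  tmpfs  defaults,nosuid,nodev,noexec  0 0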
08-10-2018
05:29 PM
I'm pretty sure this is no longer correct. Oozie supports Kerberos delegation for Hive2 actions, though I have yet to get it all working on the latest release of HDP 2.6.
08-04-2018
12:44 PM
1 Kudo
@Venkat It's an interface that allows tools like Pig, MapReduce, and Spark to access and create Hive tables and views. See HCatalog.
07-23-2018
01:47 PM
I'm pretty sure the "orc.compress" parameter doesn't apply to tables stored as textfile. In the error message above, it's obvious Hive detected Snappy and then for some reason ran out of memory. How big is the Snappy file, and how much memory is allocated on your cluster for YARN?
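For comparison, orc.compress only makes sense on an ORC table. A minimal sketch with a hypothetical table:

CREATE TABLE example_orc (id INT, msg STRING)
STORED AS ORC
TBLPROPERTIES ('orc.compress'='SNAPPY');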
07-23-2018
01:35 PM
I would just read the table with LazySimpleSerDe and use the substr() function to extract the columns. I've found that to be more performant than RegexSerDe, and it's clearer to read. You can either run the substring query directly or put it in a view.
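A minimal sketch of that approach, with a hypothetical fixed-width file and made-up column positions:

CREATE EXTERNAL TABLE raw_lines (line STRING)
LOCATION '/data/raw_lines';

CREATE VIEW parsed AS
SELECT trim(substr(line, 1, 10)) AS id,
       trim(substr(line, 11, 25)) AS name
FROM raw_lines;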
07-23-2018
01:31 PM
1 Kudo
You can get the explain plan by just adding the keyword EXPLAIN before the SQL statement and executing it. Some SQL tools will automatically trim part of the command, so make sure you highlight EXPLAIN and the entire query. That will just generate the plan and doesn't run the query. The Tez summary is a summary of actual work performed, so there's no way to get it without running the query.
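For example (the table name is hypothetical):

EXPLAIN SELECT count(*) FROM web_logs WHERE status = 500;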
07-23-2018
01:17 PM
1 Kudo
Our organization has taken the approach of using Knox for all of that, as it doesn't require your BI tools to be in the same domain. Some tools we use support Kerberos, but there are a number of caveats that can make it frustrating, like ticket renewals and distributing keytabs, and Knox didn't require any of that. One note for Knox: if you're running really large SQL statements, you'll have to increase the HTTP request size.
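If memory serves, that's the request buffer size in gateway-site (treat the property name as an assumption and verify against your Knox version; the value is illustrative):

gateway.httpserver.requestBuffer=32768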
07-23-2018
01:14 PM
Unfortunately, you're going to find that some programs that support JDBC try to do more than just the query you're submitting. NetBeans is a classic example because it runs a select count(*) on every query. In this case it results in a JDBC call that the Phoenix driver doesn't support and probably isn't important. I've never used SoapUI's JDBC tool, but I do know that DBeaver and Eclipse with the DBeaver plugin both work correctly with Phoenix, as do DBVisualizer and JetBrains DataGrip. If you can use one of those tools, you'll have a much easier time.
07-19-2018
04:31 PM
Some functionality is there, but the ability to browse the Hive structure in a visual fashion is gone, and some capabilities, like viewing the applicable Ranger policies, aren't possible from JDBC. Definitely a step backwards no matter what.
07-19-2018
01:43 PM
Superset also doesn't appear to work with Kerberos or Hive HTTP transport, so it's really not a replacement. We're going to have to wait to upgrade until Hortonworks can provide a suitable replacement for browsing Hive via a web browser.
07-09-2018
05:06 PM
Even if you can connect Hive to MariaDB HA, you'll run into an issue due to the lack of primary keys on some of the tables. I submitted a Jira, https://issues.apache.org/jira/browse/HIVE-17306, for this issue, as you can't use MySQL clusters with Hive either.
07-09-2018
05:00 PM
You're using the Cloudera driver, not the Hortonworks driver, so you might not get many responses. How are you using the UseNativeQuery parameter in your JDBC connection string? I think it should be UseNativeQuery=1, but I don't normally use the Cloudera driver. Can you attach an example of your connection string with the hostname anonymized?
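For what it's worth, the Cloudera driver usually takes properties appended to the URL with semicolons, something like this (a sketch with a placeholder host and auth settings; check the driver's documentation):

jdbc:hive2://hiveserver.example.com:10000;AuthMech=3;UID=user;PWD=pass;UseNativeQuery=1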
04-21-2018
05:56 AM
@Ryan LaMothe Did you ever figure out a solution to this? I'm having the same issue on new installations of HDP 2.6.4 and HDF 3.1 on both CentOS 6 and 7. Non-partitioned streaming works just fine with no errors.
10-27-2017
12:18 PM
1 Kudo
@Paras Mehta I've been dealing with the same issue for a couple of weeks. The workaround that I found was to use the Phoenix Query Server and the JDBC thin client instead. It doesn't require any of the Hadoop resources. However, there does appear to be a performance penalty for large numbers of inserts. I'm still trying to track down whether it's possible to add the hbase-site.xml to the NiFi classpath, as hinted at in the Hive connection pool, but that wouldn't work if you have multiple Hadoop clusters you're working with. Based on my research over the last couple of weeks, the NiFi community seems to be pretty anti-Phoenix anyway, so expect to have to fight with all of the processors due to slight changes in syntax.
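For reference, the thin client connects through the Query Server with a URL along these lines (the hostname is a placeholder; PROTOBUF is the usual serialization):

jdbc:phoenix:thin:url=http://queryserver.example.com:8765;serialization=PROTOBUF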