Member since: 12-08-2015
Posts: 34
Kudos Received: 19
Solutions: 3
My Accepted Solutions
Title | Views | Posted
--- | --- | ---
 | 1418 | 07-26-2016 06:52 PM
 | 1482 | 06-21-2016 06:45 PM
 | 7910 | 05-11-2016 06:11 PM
05-11-2016
07:09 PM
@Chris McGuire, I'm not sure you're actually using the Hive Streaming API, then. I don't know how Spark Streaming is set up to write out to Hive, so it may be behaving correctly.
05-11-2016
06:11 PM
2 Kudos
Hi @Chris McGuire, can you please provide the output of "hdfs dfs -ls -R <table-folder>"? Compaction only operates on tables with delta directories. I suspect that the method you're using (SaveMode.Append) is just appending files to the existing partition (or adding a new partition) and not actually creating deltas. Best, Eric
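For reference, a rough sketch of what to look for in that listing; the warehouse path and file names below are hypothetical, but an ACID table the compactor can work on contains delta_* (and eventually base_*) directories:

hdfs dfs -ls -R /apps/hive/warehouse/mydb.db/mytable
# .../mytable/delta_0000001_0000001/bucket_00000   <- written via the Hive ACID / Streaming path
# .../mytable/delta_0000002_0000002/bucket_00000
# A table populated with SaveMode.Append usually just shows plain files such as
# part-00000 and no delta_* directories, so there is nothing for the compactor to merge.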
01-25-2016
02:58 PM
4 Kudos
I would generally say that if you're talking SSO for ODBC on Windows, the easiest client-side option is Kerberos. As long as the user is logged into their workstation as an AD user that has rights on the Hive tables, the Kerberos ticket the OS gets at login will be sufficient for authentication to Hive over the SASL transport. You just need to make sure you've kerberized your cluster using one of your Active Directory Domain Controllers as the KDC (http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.0.0/bk_Ambari_Security_Guide/content/ch_configuring_amb_hdp_for_kerberos.html). At that point, just configure the ODBC driver for Kerberos. Again, the only client-side requirement is that the machine is joined to the domain and the user logs in with an authorized account.
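As a quick client-side sanity check (a sketch only; the realm name is a placeholder), confirm the workstation already holds a ticket-granting ticket before touching the driver settings:

klist
# Expect a krbtgt/YOUR.AD.REALM ticket that was obtained at Windows login.
# If it is present, the ODBC DSN only needs its authentication mechanism set to
# Kerberos plus the HiveServer2 host FQDN and service name (site-specific values).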
01-11-2016
03:30 PM
1 Kudo
A few things to double-check:
1. Is there any message in the HiveServer2 log that correlates with the Beeline error?
2. Can you verify that the JCE Unlimited Strength Jurisdiction Policy files are installed correctly and that alternatives points at the copy of Java that has this policy? (I notice your Kerberos ticket only supports AES-256.)
3. Set the system property sun.security.krb5.debug to true on your JVM and see whether the debug output gives any details (see the sketch below).
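A hedged sketch of how to check item 2 and enable item 3 from the client; the JDK layout and the way Beeline picks up JVM options can vary by install, so treat the paths and variables as assumptions:

klist -e                          # confirm the ticket's encryption types (e.g. aes256-cts-hmac-sha1-96)
ls $JAVA_HOME/jre/lib/security/   # local_policy.jar / US_export_policy.jar should be the unlimited-strength versions
export HADOOP_CLIENT_OPTS="-Dsun.security.krb5.debug=true"
beeline -u "jdbc:hive2://<hs2-host>:10000/default;principal=hive/<hs2-host>@<REALM>"
# Kerberos debug output should now appear on the Beeline console.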
12-31-2015
06:12 PM
In the short run you can always look at the metastore database (assuming you're using the DbTxnManager) and try to clear them manually from the tables there.
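As a rough sketch, assuming a MySQL-backed metastore database named hive and the standard transactional schema (adjust for your RDBMS and database name), the lock and transaction tables can be inspected like this:

mysql -u hive -p hive -e "SELECT HL_LOCK_EXT_ID, HL_DB, HL_TABLE, HL_LOCK_STATE FROM HIVE_LOCKS;"
mysql -u hive -p hive -e "SELECT TXN_ID, TXN_STATE, TXN_USER FROM TXNS;"
# Deleting rows from these tables by hand is a last resort; back up the metastore database first.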
12-22-2015
10:28 PM
@Darpan Patel I would double-check those hostnames and confirm that the ports are open (a quick check is sketched below).
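A quick way to verify name resolution and port reachability from the client; the hostnames and ports here are placeholders, so substitute the ones from your configuration:

nslookup <service-host>
nc -vz <namenode-host> 8020        # NameNode RPC (HDP default)
nc -vz <hiveserver2-host> 10000    # HiveServer2 (HDP default)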
12-22-2015
02:23 PM
@Darpan Patel I haven't tried setting that up in a NameNode HA environment yet, but it looks like it is trying to resolve the NameNode service name (the nameservice ID) in DNS and failing; a nameservice is a logical name, not a resolvable host. As for the Hive error, I'd suggest stopping ambari-server, doing a kdestroy for the user that ambari-server runs as, and a kinit as the ambari-server user before starting it again.
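You can confirm what the client is actually being pointed at with a couple of hedged checks (the nameservice name below is an example):

hdfs getconf -confKey fs.defaultFS                 # e.g. hdfs://mycluster, a nameservice ID rather than a DNS name
hdfs getconf -confKey dfs.ha.namenodes.mycluster   # the real NameNode identifiers behind that nameservice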
12-21-2015
07:01 PM
1 Kudo
No worries, I hope some of these things have been fixed since I went through this back in September (#4 should be resolved in Ambari 2.1.2). The kdestroy/kinit thing was definitely strange; I never did work out why it was needed.
12-21-2015
06:40 PM
1 Kudo
So I had a bunch of trouble with these; here are some of the things to note:
1. When creating the view in Ambari, don't use the "Local Ambari Managed Cluster" option; always use the custom configuration when you have a kerberized cluster.
2. Definitely read the instructions carefully (i.e. this one: http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_ambari_views_guide/content/section_pig_view_kerberos_config.html) per @Neeraj Sabharwal.
3. Stop Ambari Server, do a kdestroy for the user ambari-server runs as, do a kinit for the ambari user using its proper keytab as the ambari linux user, then start ambari-server again. Do this procedure each time you restart Ambari Server (see the sketch below).
4. For the Pig view, there was a known issue where you needed to add ,/usr/hdp/${hdp.version}/hive/lib/hive-common.jar to your templeton.libjars for WebHCat (https://issues.apache.org/jira/browse/AMBARI-13096). Check your Ambari version...
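A sketch of the restart procedure in item 3; the local user, principal, and keytab path are assumptions based on a typical Ambari Kerberos setup, so substitute your own:

ambari-server stop
su - ambari            # or whichever local user ambari-server runs as
kdestroy
kinit -kt /etc/security/keytabs/ambari.server.keytab ambari-server@EXAMPLE.COM   # assumed keytab path and principal
exit
ambari-server start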
12-18-2015
08:16 PM
3 Kudos
If you wish to reference a file in S3 from a Pig script, you might do something like this:

set fs.s3n.awsSecretAccessKey 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxx';
set fs.s3n.awsAccessKeyId 'xxxxxxxxxxxxxxxxxxxxx';
A = load 's3n://<bucket>/<path-to-file>' USING TextLoader;

If you're on HDP 2.2.6, you'll likely see this error:

java.io.IOException: No FileSystem for scheme: s3n

The following steps resolve the issue. In core-site.xml add:

<property>
  <name>fs.s3n.impl</name>
  <value>org.apache.hadoop.fs.s3native.NativeS3FileSystem</value>
  <description>The FileSystem for s3n: (Native S3) uris.</description>
</property>

Then add the following to the MR2 and/or Tez classpath(s):

/usr/hdp/${hdp.version}/hadoop-mapreduce/*

These configs ensure two things:
- the worker YARN containers spawned by Pig have access to the hadoop-aws.jar file
- the worker containers know which class implements the filesystem type identified by "s3n://"

References: Apache Mailing Lists Topic from Legacy Hortonworks Forums