Member since: 04-04-2017
Posts: 10
Kudos Received: 5
Solutions: 0
08-15-2018
03:11 PM
@Jonathan Sneep No, I don't want to add the keys to hive-site.xml or update core-site.xml with the credential path; that is why I am trying to pass hadoop.security.credential.provider.path as part of the beeline command line.
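In other words, I am trying to pass the property either on the beeline command line or inside the JDBC URL, roughly like this (a sketch only; it assumes the property has already been whitelisted via hive.security.authorization.sqlstd.confwhitelist.append so HiveServer2 accepts it at connection time):
# pass the provider path as a Hive conf on the command line
beeline --hiveconf hadoop.security.credential.provider.path=jceks://hdfs@clustername/pedev/user/myuser/myuser.jceks -u "jdbc:hive2://zookeeperlist:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
# or embed it in the hive_conf_list part of the JDBC URL (after the "?")
beeline -u "jdbc:hive2://zookeeperlist:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2?hadoop.security.credential.provider.path=jceks://hdfs@clustername/pedev/user/myuser/myuser.jceks"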
08-15-2018
02:53 PM
@Jonathan Sneep Yes, I did try that option 2 days ago, "fs.s3a.impl=org.apache.hadoop.fs.s3native.NativeS3FileSystem"; it did not work, it threw an error saying to pass in access.key and secret.key. I followed this article: https://support.hortonworks.com/s/article/External-Table-Hive-s3n-bucket Also, can we verify that there are no typos inside the jceks file? Yes, I am submitting Spark and HDFS distcp jobs with the same jceks.
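For reference, I assume the aliases inside the jceks can be listed with something like this (a sketch using the same provider path as in my beeline command):
# should print fs.s3a.access.key and fs.s3a.secret.key if the aliases were stored correctly
hadoop credential list -provider jceks://hdfs@clustername/pedev/user/myuser/myuser.jceks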
08-15-2018
01:22 PM
2 Kudos
Hi, I followed the document below to create my hadoop.security.credential.provider.path file, and I am trying to pass it from the beeline command string: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_cloud-data-access/content/s3-credential-providers.html But whenever I try to create an external table from S3 data, it throws the error below. I even followed the steps from https://community.hortonworks.com/questions/54123/how-to-pass-hadoopsecuritycredentialproviderpath-i.html?childToView=212393 I have whitelisted the property hadoop.security.credential.provider.path so that it can be set at connection time in the JDBC connection string between beeline and HiveServer2. In hive-site:
hive.security.authorization.sqlstd.confwhitelist.append = hadoop.security.credential.provider.path
I also tried passing the credential path as part of the JDBC string, as suggested in the forum answer above, but no luck; it still throws the same error. Can someone please help me?
bash-4.2$ beeline --hive-conf hadoop.security.credential.provider.path=jceks://hdfs@clustername/pedev/user/myuser/myuser.jceks -u "jdbc:hive2://zookeeperlist:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;"
Beeline version 1.2.1000.2.6.3.0-235 by Apache Hive
0: jdbc:hive2://hostname:2> CREATE EXTERNAL TABLE HDFSaudit_data8 (access string, action string, agenthost string, cliip string, clitype string, enforcer string, event_count bigint, event_dur_ms bigint, evttime timestamp, id string, logtype string, policy bigint, reason string, repo string, repotype bigint, reqdata string, requser string, restype string, resource string, result bigint, seq_num bigint, sess string) ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe' STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' LOCATION 's3a://bucketname/hivetest2';
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.io.InterruptedIOException: doesBucketExist on bucketname: com.amazonaws.AmazonClientException: No AWS Credentials provided by BasicAWSCredentialsProvider EnvironmentVariableCredentialsProvider SharedInstanceProfileCredentialsProvider : com.amazonaws.AmazonClientException: Unable to load credentials from Amazon EC2 metadata service) (state=08S01,code=1)
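For completeness, this is roughly how the credential store was created and sanity-checked, following the doc above (a sketch only; the key values are placeholders, and the hdfs dfs check assumes the S3A jars are on the client classpath):
# store the S3A keys in the jceks file on HDFS (placeholder values)
hadoop credential create fs.s3a.access.key -value MY_ACCESS_KEY -provider jceks://hdfs@clustername/pedev/user/myuser/myuser.jceks
hadoop credential create fs.s3a.secret.key -value MY_SECRET_KEY -provider jceks://hdfs@clustername/pedev/user/myuser/myuser.jceks
# confirm both aliases are present
hadoop credential list -provider jceks://hdfs@clustername/pedev/user/myuser/myuser.jceks
# sanity check the provider outside Hive
hdfs dfs -D hadoop.security.credential.provider.path=jceks://hdfs@clustername/pedev/user/myuser/myuser.jceks -ls s3a://bucketname/hivetest2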
Labels:
- Apache Hive
11-02-2017
07:10 PM
Yes, I had the same issue with comma-separated values; pipe separation did fix it. Thanks.
10-17-2017
05:54 PM
@nshetty
Thanks for the response. I tried the above solution to run a Hive job, but it picks up the ticket in the default location, i.e. /tmp/krb5cc; it does not pick up the ticket from the custom location, /tmp/kafka.ticket. So how can I pass this custom ticket cache as part of Hadoop commands? Is it possible to pass a custom ticket cache as part of the commands?
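For clarity, what I am hoping for is something along these lines (a rough sketch; the principal and realm are placeholders, and it assumes the Hadoop client honors the KRB5CCNAME environment variable):
# obtain the ticket into the custom cache
kinit -c /tmp/kafka.ticket myuser@MY.REALM
# point the Hadoop/JVM Kerberos libraries at that cache for this shell
export KRB5CCNAME=FILE:/tmp/kafka.ticket
# subsequent Hadoop commands in this shell should authenticate with /tmp/kafka.ticket
hdfs dfs -ls /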
10-11-2017
06:17 PM
1 Kudo
I have a scenario with two clusters, development and production. I have an edge node from which I can submit jobs to both clusters, so every time I submit jobs to the two different clusters I have to kinit with the dev domain ID to submit to the dev cluster and kinit with the prod domain ID to submit to the prod cluster. If a dev job is running from the edge node and in the meantime cron triggers a kinit with the prod ID to submit jobs, the running dev job fails with a Kerberos GSS exception (failed Kerberos authentication). Is there any way I can use a custom ticket cache for each of the two realms at the same time, so that I can submit jobs to both from the same edge node? I went through the Kerberos documentation; it states that everything runs on a login-user basis, and if someone runs kinit in the middle it updates the default cache file /tmp/krb5c* itself.
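To make the question concrete, what I would like is roughly this (a sketch only; principals, realms, and keytab paths are placeholders, and it assumes the clients honor KRB5CCNAME so the two caches never overwrite each other):
# interactive/dev session uses its own cache
export KRB5CCNAME=FILE:/tmp/krb5cc_dev
kinit -kt /path/to/dev.keytab devuser@DEV.REALM
# ... submit dev jobs from this shell ...
# the cron job for prod uses a separate cache, so it does not clobber the dev ticket
export KRB5CCNAME=FILE:/tmp/krb5cc_prod
kinit -kt /path/to/prod.keytab produser@PROD.REALM
# ... submit prod jobs from the cron environment ...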
Labels:
- Apache Hive
04-05-2017
08:11 PM
1 Kudo
@Wael Emam and @Vipin Rathor I faced the same API issue as Vipin; the document definitely needs a cleanup. I created a policy using the Ranger REST API and tried to delete it using the document's version of the request URL, "service/public/api/service/{id}", but I am failing with a 404 error:
curl -iv -u 'D****:********' -X DELETE 'http://myserver.devfg.rbc.com:6080/service/public/api/policy/123'
HTTP/1.1 404 Not Found
< Server: Apache-Coyote/1.1
< X-Frame-Options: DENY
< Content-Length: 0
< Date: Wed, 05 Apr 2017 16:58:36 GMT
But what worked for me was changing the URL to:
curl -iv -u 'D****:********' -X DELETE 'http://myserver.devfg.rbc.com:6080/service/plugins/policies/123'
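For anyone hitting the same 404, a rough sketch of verifying the policy id and then deleting it through the plugins endpoint (the host, credentials, and policy id below are placeholders):
# look the policy up first to confirm the id exists
curl -u 'admin:password' -X GET 'http://ranger-host:6080/service/plugins/policies/123'
# then delete it by id
curl -u 'admin:password' -X DELETE 'http://ranger-host:6080/service/plugins/policies/123'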