02-28-2016 09:23 PM | 2 Kudos
Does anyone know how to make Hive default to S3 so that each table does not need to be external? Is this possible? Articles such as http://blog.sequenceiq.com/blog/2014/11/17/datalake-cloudbreak-2/ indicate that it is, but on HDP 2.3 HiveServer2 appears to fail when the startup scripts try to reach the warehouse location through webhdfs with the S3 URI embedded in the path.

I set hive.metastore.warehouse.dir=s3://<bucket>/warehouse and restarted. From the call below, I'm willing to bet webhdfs is barfing on the syntax. Any ideas?

2016-02-28 14:09:36,339 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT --negotiate -u : '"'"'http://<server>:50070/webhdfs/v1s3:/<bucket>/warehouse?op=MKDIRS&user.name=hdfs'"'"' 1>/tmp/tmp_QkaO7 2>/tmp/tmpFSumMx''] {'logoutput': None, 'quiet': False}
2016-02-28 14:09:36,360 - call returned (0, '')
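For comparison, a well-formed webhdfs MKDIRS request takes an HDFS path directly after /webhdfs/v1, something like this (the warehouse path here is just a hypothetical example):

    curl -sS -X PUT --negotiate -u : 'http://<server>:50070/webhdfs/v1/apps/hive/warehouse?op=MKDIRS&user.name=hdfs'

Instead, the full s3:// URI has been appended straight onto the endpoint, producing webhdfs/v1s3:/<bucket>/warehouse, which is not a valid webhdfs path.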
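For context, this is the per-table pattern I'm trying to avoid by changing the warehouse default — declaring every table external with an explicit S3 location (table name, columns, and path below are just illustrative):

    CREATE EXTERNAL TABLE events (
      id BIGINT,
      payload STRING
    )
    STORED AS ORC
    LOCATION 's3://<bucket>/warehouse/events';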
Labels:
- Apache Hadoop
- Apache Hive