Member since: 02-02-2016
Posts: 583
Kudos Received: 518
Solutions: 98
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3273 | 09-16-2016 11:56 AM |
| | 1375 | 09-13-2016 08:47 PM |
| | 5482 | 09-06-2016 11:00 AM |
| | 3179 | 08-05-2016 11:51 AM |
| | 5259 | 08-03-2016 02:58 PM |
06-16-2016
10:15 AM
2 Kudos
@ammu ch Can you share the output of these two commands?

/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p ******* get <hivehosts> <cluster> hive-site | grep dir

sudo -u hdfs hdfs crypto -listZones | grep <username>
06-16-2016
10:04 AM
@henryon wen Was that working before you enabled the Ranger HDFS plugin?
06-16-2016
09:53 AM
2 Kudos
@henryon wen Can you please check whether you have the following properties in core-site.xml?

<property>
  <name>hadoop.proxyuser.httpfs.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.httpfs.hosts</name>
  <value>*</value>
</property>

See this doc: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_hadoop-ha/content/ha-nn-deploy-hue.html
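If they are missing, one way to add them is through Ambari. A minimal sketch using the same configs.sh script mentioned earlier (assuming admin credentials; <ambari-host> and <cluster> are placeholders for your environment):

# Add the httpfs proxyuser properties to core-site via Ambari
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p ******* set <ambari-host> <cluster> core-site "hadoop.proxyuser.httpfs.groups" "*"
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p ******* set <ambari-host> <cluster> core-site "hadoop.proxyuser.httpfs.hosts" "*"

A restart of HDFS is needed afterwards for the change to take effect.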
06-15-2016
03:09 PM
Can you please run the insert command in debug mode and share the output?

hive --hiveconf hive.root.logger=DEBUG,console
06-15-2016
09:01 AM
5 Kudos
@Phoncy Joseph
You can raise the minimum input split size in Hive to reduce the number of mappers, but you might also need to increase the mapper heap size:

set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
set mapred.min.split.size=100000000;

Alternatively, try using Hadoop archives (HAR files) to combine the small files into a single archive: https://hadoop.apache.org/docs/r1.2.1/hadoop_archives.html#Looking+Up+Files
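For example, a minimal sketch of creating and reading a HAR archive (the /user/hive/smallfiles and /user/hive/archived paths are hypothetical):

# Pack the smallfiles directory (relative to parent /user/hive) into one archive
hadoop archive -archiveName smallfiles.har -p /user/hive smallfiles /user/hive/archived

# The archived files stay readable through the har:// scheme
hadoop fs -ls har:///user/hive/archived/smallfiles.har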
06-15-2016
08:46 AM
Yes, you can also use s3n instead of s3, as mentioned in the article; just make sure the secret key is defined in the s3n properties.
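For reference, a minimal sketch of the s3n credential properties in core-site.xml (the key values are placeholders):

<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>YOUR_SECRET_KEY</value>
</property>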
06-14-2016
10:15 PM
Sorry, I didn't get cycles to work on this. I will try it in a couple of days. Also, let's wait and see if other experts comment on this.
06-14-2016
10:08 PM
Hi @Zack Riesland, please let me know if you require further info, or accept this answer to close this thread.
06-13-2016
04:46 PM
Yes, it is backed by HDP; we only need to make sure that the S3 secret keys are in place. See this doc: https://community.hortonworks.com/articles/25578/how-to-access-data-files-stored-in-aws-s3-buckets.html
06-13-2016
03:50 PM
3 Kudos
@Zack Riesland You can put it directly with "hadoop fs -put /tablepath s3://bucket/hivetable". If the Hive table is partitioned, you can run this command for each partition directory concurrently through a small shell script (see the sketch below) to speed up data ingestion. The same S3 data can then be used again in a Hive external table:

CREATE EXTERNAL TABLE mydata (key STRING, value INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ' '
LOCATION 's3n://mysbucket/';
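A minimal sketch of such a script (the warehouse path and bucket name are hypothetical):

# List the partition directories, copy each to S3 in the background, then wait
for part in $(hadoop fs -ls /apps/hive/warehouse/mytable | grep '^d' | awk '{print $NF}'); do
  hadoop fs -cp "$part" s3n://mybucket/hivetable/ &
done
wait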