Member since: 08-16-2016
Posts: 642
Kudos Received: 131
Solutions: 68

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3978 | 10-13-2017 09:42 PM |
| | 7475 | 09-14-2017 11:15 AM |
| | 3799 | 09-13-2017 10:35 PM |
| | 6034 | 09-13-2017 10:25 PM |
| | 6602 | 09-13-2017 10:05 PM |
05-05-2017
09:18 AM
1 Kudo
I removed all Java packages using yum remove "java*", then downloaded a fresh JDK from the Oracle website. Make sure you are not using OpenJDK anywhere, then set the Java path properly. It should work then.
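For reference, a minimal sketch of those steps on a RHEL/CentOS box; the RPM file name and the install path are assumptions and will differ per JDK version:

```bash
# Remove all existing Java packages (including OpenJDK):
sudo yum remove "java*"
# Install the Oracle JDK downloaded from oracle.com (hypothetical file name):
sudo rpm -ivh jdk-8u131-linux-x64.rpm
# Point JAVA_HOME and PATH at the Oracle JDK, not OpenJDK:
export JAVA_HOME=/usr/java/default
export PATH="$JAVA_HOME/bin:$PATH"
java -version   # should report "Java(TM)", not OpenJDK
```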
05-04-2017
12:02 PM
This issue was resolved in OneFS 8.0.0.4; see the release notes under Resolved Issues. During failover to a secondary ResourceManager, HDFS MapReduce jobs might have been disrupted. This could occur because, during failover, OneFS renegotiated the connection to the ResourceManager using the same Kerberos ticket but with a different name. As a result, the request to connect to the secondary ResourceManager could not be authenticated and access was denied. (Issue 181448)
05-01-2017
07:25 AM
Thanks, I got it working.
04-28-2017
11:00 AM
In the end it turned out not to be an enctype issue 😉 The problem was that the hostname of the Solr server in the curl URL did not match the hostname part of Solr's Kerberos principal. After putting the FQDN in the call: curl --negotiate -u : 'http://mgr-node1.test.demo:8983......... the error disappeared and everything works nicely.
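To illustrate the fix, a hedged sketch; the collections endpoint is an assumption, the point is the FQDN in the URL:

```bash
# The hostname in the URL must match the host part of Solr's Kerberos
# service principal (e.g. HTTP/mgr-node1.test.demo@REALM).
# Short hostname -- SPNEGO requests a ticket for the wrong principal
# and authentication fails:
curl --negotiate -u : 'http://mgr-node1:8983/solr/admin/collections?action=LIST'
# FQDN -- matches the service principal, so the call succeeds:
curl --negotiate -u : 'http://mgr-node1.test.demo:8983/solr/admin/collections?action=LIST'
```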
04-27-2017
08:51 AM
1 Kudo
I have not found good information on the ZK store, but HMS HA requires at minimum the DB token store. Something from the user's session, presumably the delegation token, is saved in either memory, the metastore DB, or ZK. The first means you would need a new token/session if you ended up connecting to a different HMS instance. The other two allow for HA, and I imagine ZK just adds more fault tolerance, as most installs have a minimum of 3 ZK nodes versus 1 RDBMS. I don't think the ZK store is used or tested extensively, though, and I have seen other people have issues with it, specifically when Kerberos is enabled.
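As a hedged sketch of where this is configured: the token store is selected by hive.cluster.delegation.token.store.class in hive-site.xml; the config path below is an assumption for your install:

```bash
# Check which delegation token store HiveServer2/HMS is using;
# /etc/hive/conf is an assumed location.
grep -A1 'hive.cluster.delegation.token.store.class' /etc/hive/conf/hive-site.xml
# Possible values:
#   org.apache.hadoop.hive.thrift.MemoryTokenStore    (default, in-memory, no HA)
#   org.apache.hadoop.hive.thrift.DBTokenStore        (metastore DB, HA)
#   org.apache.hadoop.hive.thrift.ZooKeeperTokenStore (ZK, HA)
```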
04-25-2017
11:43 AM
@mbigelow Did you get a chance to look at my scripts and my problem? It looks like I am stuck on this; I cannot figure out any solution, and the only option left is cron jobs, which I don't want to use.
04-25-2017
08:36 AM
It is an internal table. The creation process was using the HUE GUI to 'Create a new table manually' in the Metastore Manager for the Hive default database. I didn't choose the 'Create a new table from a file' option, which allows a user to specify whether it should be an external table. I updated my reply to saranvisa's use cases: the underlying HDFS files were deleted only if the HUE user who dropped the table was its creator. Fortunately, I do have access to the HDFS superuser via the command line and was able to delete the table from my prior incident. Thanks for providing an alternative in the event that is not the case, especially since, once deployed, most users won't have command-line access, let alone HDFS superuser. Sounds like the trade-off is ease of use vs. level of security.
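A hypothetical illustration of the managed-vs-external distinction the GUI hides; table and path names here are made up:

```bash
# An EXTERNAL table's data survives a DROP; only the metadata is removed.
hive -e "CREATE EXTERNAL TABLE demo_ext (id INT)
         LOCATION '/user/demo/demo_ext';"
hive -e "DROP TABLE demo_ext;"    # metadata gone, HDFS files remain
hdfs dfs -ls /user/demo/demo_ext  # data is still there
# A managed (internal) table created without EXTERNAL would have had
# its files deleted on DROP.
```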
04-25-2017
07:54 AM
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded

Increase the container and heap sizes. I am not sure whether it is a mapper or a reducer that is failing, but here are the settings to look into:

set hive.exec.reducers.bytes.per.reducer=
set mapreduce.map.memory.mb=
set mapreduce.reduce.memory.mb=
set mapreduce.map.java.opts=<roughly 80% of container size>
set mapreduce.reduce.java.opts=<roughly 80% of container size>
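A hypothetical sizing to make the 80% rule concrete, assuming 4 GB YARN containers; adjust the numbers to your cluster:

```bash
# Heap (-Xmx) at roughly 80% of the 4096 MB container; run the settings
# in the same session as the failing query.
hive -e "
  set mapreduce.map.memory.mb=4096;
  set mapreduce.reduce.memory.mb=4096;
  set mapreduce.map.java.opts=-Xmx3276m;
  set mapreduce.reduce.java.opts=-Xmx3276m;
  -- your query here
"
```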
04-21-2017
03:10 AM
The issue seems to be resolved, now that our admin found out why HiveServer2 was restarting frequently.
04-19-2017
02:39 PM
Man, this took a bit of trial and error. The issue with the first run is that it returns an empty line. I tried a few awk-specific ways to get around it, but they didn't work. So here is a hack, which also uses the shell variable within awk.
DC=PN
hdfs dfs -ls /lib/ | grep "drwx" | awk '{system("hdfs dfs -count " $8) }' | awk '{ gsub(/\/lib\//,"'$DC'"".hadoop.hdfs.",$4); print $4 ".folderscount",$1"\n"$4 ".filescount",$2"\n"$4 ".size",$3;}'
PN.hadoop.hdfs.archive.folderscount 9
PN.hadoop.hdfs.archive.filescount 103
PN.hadoop.hdfs.archive.size 928524788
PN.hadoop.hdfs.dae.folderscount 1
PN.hadoop.hdfs.dae.filescount 13
PN.hadoop.hdfs.dae.size 192504874
PN.hadoop.hdfs.schema.folderscount 1
PN.hadoop.hdfs.schema.filescount 14
PN.hadoop.hdfs.schema.size 45964
DC=VA
hdfs dfs -ls /lib/ | grep "drwx" | awk '{system("hdfs dfs -count " $8) }' | awk '{ gsub(/\/lib\//,"'$DC'"".hadoop.hdfs.",$4); print $4 ".folderscount",$1"\n"$4 ".filescount",$2"\n"$4 ".size",$3;}'
VA.hadoop.hdfs.archive.folderscount 9
VA.hadoop.hdfs.archive.filescount 103
VA.hadoop.hdfs.archive.size 928524788
VA.hadoop.hdfs.dae.folderscount 1
VA.hadoop.hdfs.dae.filescount 13
VA.hadoop.hdfs.dae.size 192504874
VA.hadoop.hdfs.schema.folderscount 1
VA.hadoop.hdfs.schema.filescount 14
VA.hadoop.hdfs.schema.size 45964
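For what it's worth, a cleaner way to get the shell variable into awk is its -v option instead of the quote-splicing hack; a sketch assuming the same /lib/ layout:

```bash
# Same report, but passing DC into awk as a proper variable:
DC=PN
hdfs dfs -ls /lib/ | grep "drwx" | awk '{ system("hdfs dfs -count " $8) }' |
  awk -v dc="$DC" '{
    gsub(/\/lib\//, dc ".hadoop.hdfs.", $4)
    print $4 ".folderscount", $1
    print $4 ".filescount",   $2
    print $4 ".size",         $3
  }'
```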