Member since: 06-23-2014
Posts: 9
Kudos Received: 3
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 90220 | 10-10-2014 08:09 AM |
07-04-2017 12:54 PM
1 Kudo
You mentioned that you still need to fix the 'Under-Replicated Blocks'. Here is what I found via Google to fix it:

```bash
$ su - <$hdfs_user>
$ hdfs fsck / | grep 'Under replicated' | awk -F':' '{print $1}' >> /tmp/under_replicated_files
$ for hdfsfile in `cat /tmp/under_replicated_files`; do echo "Fixing $hdfsfile :"; hadoop fs -setrep 3 $hdfsfile; done
```
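If you want to confirm the loop actually helped, a quick follow-up sketch (the file path in the second command is only a placeholder; substitute one of the paths from /tmp/under_replicated_files):

```bash
# Count how many files fsck still reports as under replicated
hdfs fsck / | grep -c 'Under replicated'

# Spot-check the replication factor of one of the files that was fixed
# (%r prints the replication factor; the path here is just an example)
hadoop fs -stat %r /user/example/somefile
```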
09-10-2014 10:12 AM
Hi Bart,

There's not really a problem here. Hive emits that warning, but we've found that in some cases setting metastore.local avoids bugs (specifically when running on PostgreSQL), so it's better to have the warning than the bug.

This might help explain the process directory where you're finding all of those hive-site.xml files: http://blog.cloudera.com/blog/2013/07/how-does-cloudera-manager-work/

CM automatically generates the metastore uris property based on the configured Hive Metastore Server role's host and port. Unless you have "Bypass Hive Metastore Server" selected, CM emits the metastore uris property in all client configuration (/etc/hive/conf/hive-site.xml) and in all dependent services (Impala, Hue, etc.) that need to talk to Hive. The Hive deprecation warning is thrown whether or not the metastore uris are configured, so you can safely ignore it.

Thanks,
Darren
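If you want to see what CM actually generated on a client host, a quick check (assuming a standard deployment where the client config lives at the path mentioned above; the thrift URI and port 9083 in the comment are only the usual defaults):

```bash
# Print the generated hive.metastore.uris property from the CM-deployed client config;
# on a typical deployment the value looks like thrift://<metastore-host>:9083
grep -A 1 'hive.metastore.uris' /etc/hive/conf/hive-site.xml
```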