Member since: 04-05-2016
Posts: 188
Kudos Received: 19
Solutions: 11
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 908 | 10-30-2017 07:05 AM
 | 1194 | 10-12-2017 07:03 AM
 | 4847 | 10-12-2017 06:59 AM
 | 7115 | 03-01-2017 09:56 AM
 | 21281 | 01-26-2017 11:52 AM
08-01-2016
07:30 AM
Please find attached the hbase-site.xml and hbase-env.sh files. Thank you @Ankit Singhal.
08-01-2016
07:19 AM
2 Kudos
Changing the database from Oracle 12c to MySQL got Hive working without issues.
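For anyone hitting the same thing, these are the metastore connection properties that change in hive-site.xml. The host, database name, and credentials below are placeholders, not my real values:

```xml
<!-- Hypothetical MySQL-backed Hive metastore settings; db-host, the
     database name, and the credentials are placeholders to replace. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://db-host:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive-password</value>
</property>
```

The MySQL JDBC driver jar also has to be on the metastore's classpath for the connection to come up.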
08-01-2016
07:16 AM
Changing from Oracle 12c to MySQL resolved this issue. I can't really put my finger on what made Oozie so unstable with Oracle 12c.
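The Oozie side of the switch lives in oozie-site.xml. Again, the host, database name, and credentials are placeholders:

```xml
<!-- Hypothetical MySQL settings for the Oozie database; db-host,
     the database name, and the credentials are placeholders. -->
<property>
  <name>oozie.service.JPAService.jdbc.driver</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>oozie.service.JPAService.jdbc.url</name>
  <value>jdbc:mysql://db-host:3306/oozie</value>
</property>
<property>
  <name>oozie.service.JPAService.jdbc.username</name>
  <value>oozie</value>
</property>
<property>
  <name>oozie.service.JPAService.jdbc.password</name>
  <value>oozie-password</value>
</property>
```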
08-01-2016
07:14 AM
Changing from Oracle 12c to MySQL resolved this issue.
08-01-2016
07:09 AM
This issue was resolved by restarting the namenode.
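For completeness, a manual restart on a Hadoop 2.x install looks roughly like this (assuming you are not restarting through Ambari; script locations vary by distribution):

```sh
# On the NameNode host, as the hdfs user; hadoop-daemon.sh ships
# with Hadoop 2.x under $HADOOP_HOME/sbin.
su - hdfs -c "hadoop-daemon.sh stop namenode"
su - hdfs -c "hadoop-daemon.sh start namenode"
```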
08-01-2016
07:07 AM
1 Kudo
This issue was resolved by restarting the namenode.
08-01-2016
06:49 AM
I am testing a bucket cache implementation on my dev cluster (2 nodes) before implementing it on the production cluster. On dev I have 32GB RAM, and I have tried to configure the bucket cache with the free memory, but it always brings down the region server with an OOM error. I need a configuration specification to test before I slam it on the production cluster (the production cluster has in excess of 100GB RAM). Find attached the Hortonworks template I used and the error log.
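To make the sizing question concrete, here is the shape of configuration I am testing, with hypothetical numbers for a 32GB node. The heap size, MaxDirectMemorySize, and bucket cache size are my guesses to be tuned, not a recommendation:

```sh
# hbase-env.sh -- hypothetical sizing for a 32GB node: a modest heap
# plus off-heap direct memory reserved for the bucket cache.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xmx8g -XX:MaxDirectMemorySize=14g"
```

```xml
<!-- hbase-site.xml -- off-heap bucket cache; the size (in MB) must
     stay below the MaxDirectMemorySize reserved in hbase-env.sh. -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <value>13312</value>
</property>
```

Whatever the actual numbers, the invariant seems to be hbase.bucketcache.size < MaxDirectMemorySize, and heap + direct memory + OS/other-daemon headroom < physical RAM; blowing past the direct-memory limit is exactly the kind of thing that OOMs the region server.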
Labels:
- Apache HBase
07-21-2016
08:27 AM
@Sunile Manjee I agree with you on the disk-full problem, as this was a case where the log directory was full, but it seems the namenode has not recovered from that even after several restarts. I would have thought there was a "-repair" option for the fsck command, just like the one for the hbck command. My question: how can we get the namenode to update its metadata so we can resolve this block-location issue once and for all?
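For reference, the fsck invocations I am talking about (standard hdfs fsck options; the `/` path can be narrowed to a specific directory):

```sh
# Show files, their blocks, and the datanode locations of each block:
hdfs fsck / -files -blocks -locations

# List only the corrupt file blocks:
hdfs fsck / -list-corruptfileblocks

# fsck has no -repair, but it can move affected files to /lost+found
# (-move) or delete them (-delete) -- both destructive, use with care:
hdfs fsck / -move
```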
07-20-2016
10:17 AM
constantly!