Member since: 10-18-2017
Posts: 52
Kudos Received: 2
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1159 | 01-27-2022 01:11 AM |
| | 8340 | 05-03-2021 08:03 AM |
| | 4719 | 02-06-2018 02:32 AM |
| | 6210 | 01-26-2018 07:36 AM |
| | 4039 | 01-25-2018 01:29 AM |
01-27-2022
01:11 AM
For future reference: I am on an HBase cluster and also need access to the Hive metastore. It seems that if the hive-site.xml contains some wrong values, you can see this behavior.
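As a hedged illustration of the kind of value worth double-checking (the host name here is hypothetical, 9083 is the default metastore Thrift port), the metastore URI in hive-site.xml:
------
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore-host.example.com:9083</value>
</property>
------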
08-13-2021
12:51 AM
Maybe you are still asking for more than what is available? It really depends on what kind of cluster you have. The relevant parameters are:
1) Cloudera Manager -> YARN -> Configuration -> yarn.nodemanager.resource.memory-mb (the amount of physical memory, in MiB, that can be allocated for containers, i.e. all the memory that YARN can use on one worker node)
2) yarn.scheduler.minimum-allocation-mb (container memory minimum: every container will request at least this much memory)
3) yarn.nodemanager.resource.cpu-vcores (Container Virtual CPU Cores)
4) How many worker nodes? A cluster with x nodes?
I noticed you are really requesting a lot of cores too. Maybe you can try reducing these a bit? This might also be a bottleneck. A worked example of this capacity check is sketched below.
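Here is a hedged worked example of the check I mean (all numbers are assumptions, not your cluster's actual values):
------
# Suppose each worker node offers to YARN:
#   yarn.nodemanager.resource.memory-mb = 32768  (32 GiB)
#   yarn.nodemanager.resource.cpu-vcores = 16
# and the cluster has 4 worker nodes, i.e. 128 GiB and 64 vcores in total.
# Then a job submitted as:
spark-submit --num-executors 20 --executor-memory 8G --executor-cores 8 ...
# asks for roughly 20 * 8 GiB = 160 GiB (plus overhead) and 20 * 8 = 160 vcores,
# which exceeds both totals, so the containers can never all be allocated.
------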
08-11-2021
07:02 AM
Thank you for this very valuable input! (I had somehow missed the response.) I do indeed see increased latencies, but those should be negligible for hot data. I have observed this, but I think there is a limit to how much data you can keep 'hot'. This depends on a combination of settings at the level of the HBase table properties and the HBase cluster; see the sketch below. We have also discussed this in the following thread: https://community.cloudera.com/t5/Support-Questions/simplest-method-to-read-a-full-hbase-table-so-it-is-stored/m-p/317194#M227055 It would be very interesting if a more in-depth study were ever conducted and reported, as this is very relevant for applications with HBase as a back-end that require more advanced querying of the data (like, in my case, aggregations to compute a heatmap from a high volume of data points).
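As a concrete (hedged) illustration of the settings I mean, with a hypothetical table and column-family name:
------
# In the hbase shell, per column family:
alter 'heatmap_points', {NAME => 'd', IN_MEMORY => 'true', BLOCKCACHE => 'true'}
# On the cluster side, how much data can actually stay hot is bounded by the
# cache sizes, e.g. hfile.block.cache.size (on-heap block cache) and
# hbase.bucketcache.size (off-heap bucketcache).
------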
05-26-2021
10:52 PM
1 Kudo
Hello @JB0000000000001. Unfortunately, I didn't find any single document explaining the HMaster UI metrics collectively. Let me know if you come across any metric which isn't clear; I shall review the same & share the required details. If I can't help, I shall ensure I get the required details from our Product Engineering Team to assist as well. - Smarak
05-03-2021
08:03 AM
For future reference: I would like to add that the reason for the observed behaviour was an overcommit of memory. While I am writing, the memory used on the box at some point comes so close to the maximum available on the regionservers that the problems start. In my example, at the start of writing I use about 24/31 GB on the regionserver; after a while this becomes > 30/31 GB and eventually failures start.

I had to take away a bit of memory from both the off-heap bucketcache and the regionserver's heap (a sketch of this kind of adjustment is below). Then the process starts with 17/31 GB used, and after writing for an hour it maxes out at about 27 GB, but the failure was not observed anymore.

The reason I was trying to use as much of the memory as possible is that when reading, I would like to have the best performance. When reading, making use of all resources does not lead to errors; while writing, however, it does. Lesson learned: when going from a write-intensive period to a read-intensive period, it can be recommended to change the HBase config. Hope this can help others!

PS: although the reply of @smdas was of very high quality and led me to many new insights, I believe the explanation above in the current post should be marked as the solution. I sincerely want to thank you for your contribution, as your comments, in combination with the current answer, will help others in the future.
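To illustrate the kind of adjustment (the split between heap and bucketcache here is hypothetical; the values are examples, not recommendations):
------
# Cloudera Manager -> HBase -> Configuration, per regionserver on a 31 GiB node:
#   Java Heap Size of HBase RegionServer:    e.g. 16 GiB -> 14 GiB
#   hbase.bucketcache.size (off-heap, MiB):  e.g. 12288  -> 7168
# The footprint at the start of writing then drops by roughly 7 GiB,
# leaving headroom for memstore growth and GC during write-intensive periods.
------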
09-30-2020
09:02 AM
Thank you for verifying!
08-22-2019
09:03 AM
The directory /benchmarks is owned by hdfs:superuser. If you run the benchmark job as the hdfs user, it won't run, because hdfs is a banned user in the YARN container executor configuration snippet. What you need to do is change the ownership of /benchmarks to your custom user (authenticating as the hdfs superuser). Then you will be able to run the benchmark job without any issue.
------
hadoop fs -chown -R user02:user02 /benchmarks
hadoop fs -ls /benchmarks
Found 1 items
drwxr-xr-x - user02 user02 0 2019-08-22 15:55 /benchmarks/TestDFSIO
------
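For context (hedged; the values shown are common defaults, your cluster's may differ), the ban on the hdfs user comes from the Linux container executor configuration, e.g.:
------
# container-executor.cfg
banned.users=hdfs,yarn,mapred,bin
min.user.id=1000
------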
07-26-2019
07:25 AM
Hi, were we able to get to the root cause of this issue? Are you getting any errors in the logs?
01-03-2019
08:55 AM
OK, I'm back to confirm that the documentation on the Cloudera and Apache sites is not blocked from the exam environment.
10-09-2018
08:50 AM
A Kudu table