Member since: 07-31-2013 · 1924 Posts · 462 Kudos Received · 311 Solutions

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2129 | 07-09-2019 12:53 AM |
| | 12448 | 06-23-2019 08:37 PM |
| | 9560 | 06-18-2019 11:28 PM |
| | 10525 | 05-23-2019 08:46 PM |
| | 4895 | 05-20-2019 01:14 AM |
03-26-2018
06:48 PM
I am not sure what 'FA' means in this context, but the CM API offers license information via the following REST endpoints:

http://cloudera.github.io/cm_api/apidocs/v18/path__cm_license.html
http://cloudera.github.io/cm_api/apidocs/v18/path__cm_licensedFeatureUsage.html

Example queries:

```shell
# For licensee details and UIDs:
~> curl -u admin http://cm-hostname.organization.com:7180/api/v18/cm/license
…
{
  "owner" : "Licensee Name",
  "uuid" : "12345678-abcd-1234-abcd-1234abcd1234"
}

# For license usage details:
~> curl -u admin http://cm-hostname.organization.com:7180/api/v18/cm/licensedFeatureUsage
…
{
  "totals" : {
    "Core" : 8, "HBase" : 8, "Impala" : 8, "Search" : 5,
    "Spark" : 3, "Accumulo" : 4, "Navigator" : 8
  },
  "clusters" : {
    "Cluster 1" : {
      "Core" : 4, "HBase" : 4, "Impala" : 4, "Search" : 4,
      "Spark" : 2, "Accumulo" : 4, "Navigator" : 4
    },
    "Cluster 2" : {
      "Core" : 4, "HBase" : 4, "Impala" : 4, "Search" : 1,
      "Spark" : 1, "Accumulo" : 0, "Navigator" : 4
    }
  }
}
```
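If you want to consume the licensedFeatureUsage endpoint programmatically, a minimal sketch is below. It only shows the aggregation step over the sample JSON from this reply; fetching it live would need your own CM hostname and credentials, which are not shown here.

```python
import json

# Sample response shape for /api/v18/cm/licensedFeatureUsage
# (values copied from the curl output above).
SAMPLE = """
{
  "totals": {"Core": 8, "HBase": 8, "Impala": 8, "Search": 5,
             "Spark": 3, "Accumulo": 4, "Navigator": 8},
  "clusters": {
    "Cluster 1": {"Core": 4, "HBase": 4, "Impala": 4, "Search": 4,
                  "Spark": 2, "Accumulo": 4, "Navigator": 4},
    "Cluster 2": {"Core": 4, "HBase": 4, "Impala": 4, "Search": 1,
                  "Spark": 1, "Accumulo": 0, "Navigator": 4}
  }
}
"""

def summed_usage(payload: str) -> dict:
    """Sum each licensed feature's node count across all clusters."""
    data = json.loads(payload)
    sums = {}
    for usage in data["clusters"].values():
        for feature, nodes in usage.items():
            sums[feature] = sums.get(feature, 0) + nodes
    return sums

# The per-cluster numbers add up to the "totals" block in this sample.
print(summed_usage(SAMPLE))
```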
03-26-2018
10:01 AM
I found the jar file in this directory:

```shell
[cloudera@quickstart lib]$ find ./ -name hive-contrib.jar
./hive/lib/hive-contrib.jar
./oozie/oozie-sharelib-mr1/lib/hive/hive-contrib.jar
./oozie/oozie-sharelib-yarn/lib/hive/hive-contrib.jar
[cloudera@quickstart lib]$ pwd
/usr/lib/hive/lib
[cloudera@quickstart lib]$ ls -ltr hive-contrib.jar
lrwxrwxrwx 1 root root 32 Oct 23 09:59 hive-contrib.jar -> hive-contrib-1.1.0-cdh5.13.0.jar
[cloudera@quickstart lib]$
```

I then used the same `ADD JAR /usr/lib/hive/lib/hive-contrib.jar` and it worked.
03-23-2018
03:20 AM
Thank you, that works!
03-19-2018
10:09 AM
Thanks Harsh. I set the Topic Whitelist to '|' and Kafka started working.
03-17-2018
03:40 AM
1 Kudo
A bit of info:

- total_read_requests_rate_across_regionservers tracks the RS JMX bean Server::readRequestCount
- total_write_requests_rate_across_regionservers tracks the RS JMX bean Server::writeRequestCount
- total_requests_rate_across_regionservers tracks the RS JMX bean Server::totalRequestCount

The first two apply only to RS operations that operate on data, but the third also covers meta-operations such as openRegion, closeRegion, etc. that the RegionServer services (for the Master and other commanding clients).

> Which metric reflects the actual load of the HBase cluster?

Data-wise, it's the read/write requests you want to look at.

> Given the names I was expecting something like: total_requests = total_read_requests + total_write_requests but this is clearly not the case.

The readRequestCount tracks only read operations (get/scan), and it counts every row returned during a scan. The totalRequestCount counts only once per RPC made to the RS, not once per row read. This is why the three metrics differ.

Hope this helps explain what these three metrics truly are.

TL;DR:
- total_read_requests_rate_across_regionservers -> read operation count rate, counted per row scanned
- total_write_requests_rate_across_regionservers -> write operation count rate, counted per row written
- total_requests_rate_across_regionservers -> overall RS RPC-level call count rate, counted per request made to the RS, not per row
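The per-row vs. per-RPC distinction can be illustrated with a toy counter. This is a sketch only, not HBase code; the class and method names are made up for illustration:

```python
class ToyRegionServerMetrics:
    """Toy model of the three RS counters described above (not real HBase code)."""

    def __init__(self):
        self.read_request_count = 0   # bumps once per row returned by get/scan
        self.write_request_count = 0  # bumps once per row written
        self.total_request_count = 0  # bumps once per RPC, data or meta

    def scan_rpc(self, rows_returned):
        self.total_request_count += 1             # one RPC...
        self.read_request_count += rows_returned  # ...but many rows counted

    def put_rpc(self, rows_written=1):
        self.total_request_count += 1
        self.write_request_count += rows_written

    def open_region_rpc(self):
        self.total_request_count += 1  # meta-operation: only the total moves

m = ToyRegionServerMetrics()
m.scan_rpc(rows_returned=100)  # one scanner RPC pulling back 100 rows
m.put_rpc()
m.open_region_rpc()
print(m.read_request_count, m.write_request_count, m.total_request_count)
# reads=100, writes=1, total=3 -> total != reads + writes
```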
03-16-2018
10:52 PM
The command is only for non-Cloudera-Manager deployments, as the documentation notes:

> In non-managed deployments, you can start a Lily HBase Indexer Daemon manually on the local host with the following command: sudo service hbase-solr-indexer restart

If you use Cloudera Manager, instead add a new service from the Clusters page of the type "Key-Value Store Indexer" shown in the new service list. Then proceed with configuring it from CM and starting it.
03-16-2018
10:38 PM
It appears that your HMaster is crashing out during startup. Take a look at the HMaster log file under /var/log/hbase/ to investigate why. If you are able to run the configured ZK properly, check if the /hbase znode appears on it.
03-01-2018
01:45 AM
You've mentioned the RAM of the machine your DataNode is assigned to run on, but what is your configured DataNode Java JVM heap size? You could try raising it by 1 GB from its current value to resolve this. Also, what is the entire Out of Memory message? "Unable to create a new native thread" (or similar) implies something entirely different from "Java heap space": an OS nproc limit issue vs. actual heap memory exhaustion.
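The triage rule above can be sketched as a small classifier. This is an illustration only; the substrings are the common HotSpot OutOfMemoryError phrasings, so check them against your exact log text:

```python
def classify_oom(message: str) -> str:
    """Rough triage of common JVM OutOfMemoryError messages (illustrative)."""
    msg = message.lower()
    if "unable to create" in msg and "native thread" in msg:
        # Thread creation failed: an OS process/thread limit (ulimit -u),
        # not heap exhaustion.
        return "raise the OS process limit for the daemon user"
    if "java heap space" in msg:
        # Genuine heap exhaustion: raise the DataNode JVM heap size.
        return "raise the DataNode Java heap size"
    if "gc overhead limit exceeded" in msg:
        # Heap is effectively full; GC is spinning without reclaiming much.
        return "heap is near-full; raise heap or investigate object churn"
    return "inspect the full stack trace for the failing allocation"

print(classify_oom("java.lang.OutOfMemoryError: unable to create new native thread"))
# -> raise the OS process limit for the daemon user
```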
02-28-2018
01:34 AM
Thanks! Following the Cloudera docs, I was able to successfully set up cross-realm trust. The issue was with DNS.