Member since: 07-05-2018 · Posts: 13 · Kudos Received: 1 · Solutions: 0
11-06-2018 03:45 PM
We had the same error. In addition to the answer above, we changed the following specific properties:

hbase-site.xml:
- hbase.regionserver.handler.count from 30 to 40
- phoenix.regionserver.index.handler.count from 30 to 40

hbase-env (in export HBASE_REGIONSERVER_OPTS):
- -XX:ParallelGCThreads=8

Additionally, you can increase hbase.ipc.server.max.callqueue.size (default 1 GB).

Regards, Michael
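As a sketch, the hbase-site.xml changes above could look like this (the values are the ones from our setup; tune them for your own cluster and workload):

```xml
<!-- hbase-site.xml: raise RPC handler counts and the call-queue size -->
<property>
  <name>hbase.regionserver.handler.count</name>
  <value>40</value> <!-- raised from the previous value of 30 -->
</property>
<property>
  <name>phoenix.regionserver.index.handler.count</name>
  <value>40</value> <!-- raised from the previous value of 30 -->
</property>
<property>
  <name>hbase.ipc.server.max.callqueue.size</name>
  <value>2147483648</value> <!-- example: 2 GB instead of the 1 GB default -->
</property>
```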
10-25-2018 07:55 AM
You can run major compaction manually with the following commands:

hbase shell
major_compact 'TABLE_NAME'

You can also configure compaction to run automatically by adding these properties to hbase-site.xml:

hbase.regionserver.compaction.enabled
hbase.hregion.majorcompaction
hbase.hregion.majorcompaction.jitter
hbase.hstore.compactionThreshold

You can find more information here: https://hbase.apache.org/book.html#_enabling But be careful: only run major compaction when all regions are assigned; no region should be in RIT (Region in Transition). Major compaction is also a heavyweight operation, so you should run it when the cluster load is low. You can monitor compactions in the HBase Master UI. Regards, Michael
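As a sketch, the properties above could be set in hbase-site.xml like this (the values shown are, to my knowledge, the HBase defaults: major compaction every 7 days with 50% jitter, minor compaction after 3 store files; adjust to your needs):

```xml
<!-- hbase-site.xml: automatic compaction settings (default values shown) -->
<property>
  <name>hbase.regionserver.compaction.enabled</name>
  <value>true</value> <!-- set to false to disable automatic compactions -->
</property>
<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>604800000</value> <!-- interval in ms between major compactions (7 days) -->
</property>
<property>
  <name>hbase.hregion.majorcompaction.jitter</name>
  <value>0.50</value> <!-- randomizes the interval so regions do not compact at once -->
</property>
<property>
  <name>hbase.hstore.compactionThreshold</name>
  <value>3</value> <!-- minor compaction triggers once a store has this many files -->
</property>
```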
09-14-2018 03:06 AM
Hello, we want to implement HA for Livy. We use Livy a lot in combination with Zeppelin. I already installed a second Livy for Spark2 Server via Ambari. In Zeppelin there is a property, zeppelin.livy.url, which contains the URL of the Livy Server. Now, with HA, we have two running Livy Servers. How can I set both Livy Server URLs in that property, to get automatic failover when one server crashes? Is that possible? I already tried using ',' and ';' as delimiters between the URLs. For example: zeppelin.livy.url=http://livyserver1:8999,http://livyserver2:8999 Regards, Michael
Labels: Apache Zeppelin
07-24-2018 05:59 AM
Hello, if I kill an Oozie coordinator with the command below, the coordinator's workflows that are running at that moment also get killed. oozie job -oozie http://<ooziehost>:<port>/oozie -kill xxxxxxx-xxxxxxxxxxxxxx-oozie-oozi-C Is there a way to kill a coordinator but let its workflows that are in RUNNING state run to completion? The background is that the workflows are ELT processes, including receiving messages from IBM MQ, running Sqoop jobs, and transforming the data. Error handling when a workflow suddenly gets killed is quite difficult. Thanks, Michael
Labels: Apache Oozie
07-11-2018 11:23 AM
Hello, my question is: which users are in the "public" group of Ranger? Is this group a wildcard only for all service users in the cluster (e.g. hive, spark, ...), or is it a wildcard for all users in the cluster, including manually created users? In the documentation I found this part: What is the “public” group? Where did it come from? This is an internal group that serves as a wildcard for all system users. It's not completely clear to me what "system" users means in this context. Thanks, Michael
Labels: Apache Ranger