Member since: 01-21-2016
Posts: 290
Kudos Received: 76
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
| 3229 | 01-09-2017 11:00 AM
| 1304 | 12-15-2016 09:48 AM
| 5609 | 02-03-2016 07:00 AM
11-07-2016
04:00 PM
@ARUN Please keep in mind that by setting this property you are giving the load balancer algorithm limited information about the load on your cluster, which will impair its ability to balance the regions.
11-11-2016
07:57 AM
4 Kudos
@arunpoy The HBase reference guide also has good coverage of this topic, with examples: https://hbase.apache.org/book.html#ops.snapshots
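For reference, a minimal snapshot-and-export sequence looks roughly like this (the table name, snapshot name, destination URL, and mapper count are all placeholders, not values from the question):

```shell
# In the hbase shell on the source cluster, take a snapshot first:
#   snapshot 'my_table', 'my_table_snap'

# Export the snapshot's HFiles and metadata to the destination cluster:
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot my_table_snap \
  -copy-to hdfs://dest-cluster:8020/hbase \
  -mappers 4

# On the destination cluster, materialize it from the hbase shell:
#   clone_snapshot 'my_table_snap', 'my_table'
```

This avoids copying HFiles by hand, since the snapshot carries the table metadata along with the data.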
10-26-2016
02:29 PM
2 Kudos
@ARUN It's very high. The Hadoop RPC server consists of a single RPC queue per port and multiple handler (worker) threads that dequeue and process requests. If the number of handlers is insufficient, the RPC queue starts building up and eventually overflows; you may start seeing task failures, then job failures and unhappy users. It is recommended that the RPC handler count be set to 20 * log2(cluster size), with an upper limit of 200. For example, for a 64-node cluster you should set it to 20 * log2(64) = 120. The RPC handler count can be configured with the following setting in hdfs-site.xml:

<property>
  <name>dfs.namenode.handler.count</name>
  <value>120</value>
</property>

This heuristic is from the excellent Hadoop Operations book. If you are using Ambari to manage your cluster, this setting can be changed via a slider in the Ambari Server web UI. Link. Hope this helps.
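A quick way to sanity-check the heuristic on the command line (a sketch; the node count of 64 is just the example from above):

```shell
# Compute the recommended NameNode RPC handler count:
# 20 * log2(cluster size), capped at 200.
nodes=64   # example cluster size from the answer above
handlers=$(awk -v n="$nodes" 'BEGIN {
  h = 20 * log(n) / log(2);   # awk has no log2, so use log(n)/log(2)
  if (h > 200) h = 200;       # upper limit recommended above
  printf "%d", h + 0.5        # round to the nearest integer
}')
echo "$handlers"
```

For `nodes=64` this prints 120, matching the worked example; very large clusters hit the 200 cap.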
10-14-2016
02:39 PM
1 Kudo
All the slides are here: http://hadoopsummit.org/melbourne/agenda/
https://www.youtube.com/channel/UCAPa-K_rhylDZAUHVxqqsRA
10-18-2016
02:45 PM
That is not a good idea. It is not well tested how the version of Phoenix shipped in HDP 2.3/2.4 works with Apache Phoenix 4.8. You are likely on your own there 🙂
10-12-2016
04:22 PM
Check for:
1. JVM GC pauses. If the JVM is doing a stop-the-world garbage collection, it will cause the server to become disconnected from ZK and likely lose its session. Read the lines in the HBase service log prior to this error.
2. Errors in the ZooKeeper log about maxClientCnxns (https://community.hortonworks.com/articles/51191/understanding-apache-zookeeper-connection-rate-lim.html)
3. Ensure operating system swappiness is reduced from the default (often 30 or 60) to a value of 0. You can inspect this via `cat /proc/sys/vm/swappiness`.
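To go with item 3, a typical way to check and set swappiness (the value 0 follows the recommendation above; the sysctl.conf path is standard on most Linux distributions):

```shell
cat /proc/sys/vm/swappiness                            # inspect the current value
sudo sysctl -w vm.swappiness=0                         # apply immediately
echo 'vm.swappiness=0' | sudo tee -a /etc/sysctl.conf  # persist across reboots
```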
09-30-2016
08:35 PM
@ARUN As @Enis clarified, there will be no impact on your start/stop. If you have custom scripts, then you may want to fix those files to show all slaves. I have no explanation yet for why the files on the existing slave nodes did not get updated, but you are safe for start/stop.
09-29-2016
01:39 PM
@ARUN Yes, you can use node labels and queues together. Here is some documentation regarding that: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.6/bk_yarn_resource_mgt/content/configuring_node_labels.html
01-30-2018
03:34 PM
If we manually copy the HFiles from one HBase cluster to another, the list command displays all the tables, but scanning a table does not show any data. This is because I have not copied the META table entries. So is there a way to copy the META table entries to the other HBase instance as well, such that both the already existing tables and the new tables are retained with their data?
09-12-2016
06:20 AM
2 Kudos
This might be helpful for you: https://cwiki.apache.org/confluence/display/AMBARI/Using+APIs+to+delete+a+service+or+all+host+components+on+a+host