Member since: 03-01-2016 · 104 Posts · 97 Kudos Received · 3 Solutions
12-03-2020
01:37 AM
Hi, HDP 3.1 ships HBase 2.0.2, but the HBCK2 tool requires a minimum of HBase 2.0.3. How can I use it? Cluster: HDP 3.1.0 on Kerberos. Error:
07-14-2020
02:47 PM
@Thirupathi These articles were written with HDP 2.6.x versions in mind. With HDP 3 and CDH 6 shipping Phoenix 5.0, many issues have been resolved, but I cannot comment on a case-by-case basis here. You will need to log a support ticket for a more comprehensive discussion on a specific-JIRA basis.
11-22-2019
04:44 AM
I have the same set of questions: 1. How do I take znode backups? Is there a way? 2. Running rmr /hbase-secure from zkCli and restarting the HBase services should essentially rebuild the whole znode tree structure for me. Is my assumption right?
11-17-2018
01:22 AM
3 Kudos
HBase, Phoenix And Ranger

In Part 1, Part 2 and Part 3 of this article series we discussed the internals of Phoenix index maintenance and the major issues hit around this feature. In this article we will discuss the Phoenix - Ranger relationship: how it works and what had broken until recently, causing several issues to be reported.

How native HBase authorization works:

ACLs in HBase are implemented by a coprocessor called AccessController (enabled with hbase.security.authorization=true). Users are granted specific permissions such as Read, Write, Execute, Create and Admin against resources such as global, namespaces, tables, cells or endpoints. There is an additional role called "superuser". Superusers can perform any operation available in HBase, on any resource. The user who runs HBase on your cluster is a superuser, as are any principals assigned to the configuration property hbase.superuser in hbase-site.xml. Much more detail on this subject is available here.

How things change with the Ranger HBase plugin enabled:

Once Ranger is involved, one can create policies for HBase either from the Ranger Policy Manager or via grant/revoke commands from the HBase shell. These grant/revoke commands are mapped to Ranger policies: Ranger intercepts the appropriate commands from the HBase shell and adds or edits Ranger policies according to the user/group and resource information provided in the command. Of course, the user running these commands must be an admin user. It has been seen that grant/revoke commands mapped to Ranger cause multiple issues, including the creation of redundant or conflicting policies. We therefore have an option to disable this feature completely and allow only the Ranger Policy Manager to manage permissions. You can disable the command route by setting the following parameter in the Ranger configs (ranger-hbase-security.xml):

<property>
  <name>xasecure.hbase.update.xapolicies.on.grant.revoke</name>
  <value>false</value>
  <description>Should HBase plugin update Ranger policies for updates to permissions done using GRANT/REVOKE?</description>
</property>

How it works in Phoenix with Ranger:

Simply put, having a Phoenix table implies the existence of an HBase table as well, and therefore any permissions required to access that HBase table are also required for the Phoenix table. But this is not the complete truth: Phoenix has SYSTEM tables which manage table metadata, so users also need sufficient permissions on these system tables to be able to log in to the Phoenix shell and to view, create or delete tables. By design, only the first user ever to connect to Phoenix needs the CREATE permission on all SYSTEM tables; this is a one-time operation so that the system tables get created if they do not already exist. After that, regular users should only require READ on the system tables, and users who need to create tables in Phoenix would need WRITE as well. This functionality broke due to PHOENIX-3652 (partly fixed in HDP 2.6.1) and other Ranger-level complexities, and as a result Phoenix expected full permissions on the system tables. Users observed one of the following exceptions, either during Phoenix shell launch or during any DDL operation:

Error: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions (user=test@HWX.COM scope=SYSTEM:CATALOG, family=0:SALT_BUCKETS, params=[table=SYSTEM:CATALOG,family=0:SALT_BUCKETS],action=WRITE)

OR

Error: org.apache.hadoop.hbase.security.AccessDeniedException: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions for user 'test@EXAMPLE.COM' (action=admin)

To get things working temporarily, users created a policy in Ranger that gave all access on these system tables, as follows:

Table: SYSTEM.*
Column Family: *
Column: *
Groups: public
Permissions: Read, Write, Create, Admin

This was all good in an ideal world, but in the real world it raises a lot of security concerns: customers do not want every user to have full access to these system tables, for the obvious fear of manipulation of user tables and their metadata. To address this concern, our developers started working on PHOENIX-4198 (fix available with HDP 2.6.3), after which only RX permissions are needed on the SYSTEM.CATALOG table, and the rest of the authorization is handled by a coprocessor endpoint querying either Ranger or native HBase ACLs as appropriate. It is important to know that this feature does not support working with Ranger yet (work in progress).

However, the above feature was designed specifically for SYSTEM.CATALOG, and users reported issues with SYSTEM.STATS as well, where write permission was required in order to drop a table. This has been reported in PHOENIX-4753 and the issue is still unresolved. You may see the following exception:

org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions (user=user01t01@EXAMPLE.COM, scope=SYSTEM:STATS, family=0:, params=[table=SYSTEM:STATS,family=0:],action=WRITE)

Here again, the workaround is to give this user or group write permission on SYSTEM.STATS:

grant '@group', 'RWX' , 'SYSTEM:STATS'
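For reference, a minimal sketch of the less-permissive grants described above, run from the HBase shell as an admin user. The group name '@analysts' is purely illustrative, and with the Ranger plugin enabled you would normally express the same permissions as Ranger policies instead of shell grants:

# Hypothetical example: minimal Phoenix system-table permissions for a group of regular users.
# RX on SYSTEM:CATALOG is the target state once PHOENIX-4198 is in place; add W for users who create tables.
echo "grant '@analysts', 'RX', 'SYSTEM:CATALOG'" | hbase shell
# Workaround for PHOENIX-4753: allow writes to SYSTEM:STATS so that DROP TABLE succeeds.
echo "grant '@analysts', 'RWX', 'SYSTEM:STATS'" | hbase shell

Also See: Part 1, Part 2, Part 3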
11-17-2018
12:22 AM
5 Kudos
Issues with Global Indexes

In Part 1 of this article series we discussed the internals of index maintenance; in this part we will cover some of the major issues faced during the life cycle of index maintenance. Before we get into the issues, we need to understand the various "states" of an index table, which reflect its health in general:

BUILDING ("b"): Partially rebuild the index from the last disabled timestamp.
UNUSABLE ("d") / INACTIVE ("i"): The index is no longer considered for use in queries; however, index maintenance continues to be performed.
ACTIVE ("a"): The index is up to date and ready to use.
DISABLE ("x"): No further index maintenance is performed and the index is no longer considered for use in queries.
REBUILD ("r"): Completely rebuild the index and, upon completion, enable it for use in queries again.

What happens when an index update fails for any reason?

The answer is not straightforward, as there is a choice of implementations based on the use case or table type. The two choices are:

Choice 1: Block writes to the data table but let the index continue to serve read requests. Maintain a point of "consistency" in the form of a timestamp taken just before the failure occurred, and keep the write block until the index table has been rebuilt in the background and is in sync with the data table again. Properties involved: phoenix.index.failure.block.write=true and phoenix.index.failure.handling.rebuild=true. This option is not available in HDP 2 but is available with HDP 3.0.

Choice 2: Writes to the data table are not stopped, but the index table in question is disabled, to be detected by rebuilder threads (pushed from the server hosting SYSTEM.CATALOG), marked "inactive" and partially rebuilt again. In this mode the index table does not serve any client requests. This is the implementation used with HDP 2. Properties involved: phoenix.index.failure.handling.rebuild=true, phoenix.index.failure.handling.rebuild.interval=10000 (10 seconds; the interval at which the server checks whether any index table needs a partial rebuild) and phoenix.index.failure.handling.rebuild.overlap.time=1 (how far to go back before index_disable_timestamp in order to rebuild from that point).

A few scenarios for troubleshooting issues

The following scenarios help us gain more insight into how index maintenance, updates and failure handling are done in Phoenix (we will only talk about Choice 2 above).

Scenario 1: The index update is written to the WAL and, before it is written to the data or index table, the region server hosting the data table crashes. The WAL is replayed and the index updates are committed via server-to-server RPC.

Scenario 2: The data table is written, but the server-to-server RPC to the index table fails. This is where the state of the index table changes to disabled. A rebuilder thread in the server hosting the SYSTEM.CATALOG table keeps checking these index states; as soon as it detects a "disabled" index table, it starts the rebuild process by first marking the table "inactive", then running a rebuild scan on the data table regions, and finally making index updates via server-to-server RPCs. Client queries during this time refer only to the data table. Here it is good to know about INDEX_DISABLE_TIMESTAMP: it is the timestamp at which the index got disabled. It is 0 if the index is active or was disabled manually by the client, and non-zero if the index was disabled during write failures. The rebuild will therefore only happen after the disabled timestamp is updated.
One can use the following query to check the value of this column:

select TABLE_NAME, cast(INDEX_DISABLE_TIMESTAMP as timestamp) from SYSTEM.CATALOG where index_state is not null limit 10;

+--------------------+----------------------------------------+
| TABLE_NAME         | TO_TIMESTAMP(INDEX_DISABLE_TIMESTAMP)  |
+--------------------+----------------------------------------+
| TEST_INDEX_PERF    | 2018-05-26 10:28:54.079                |
| TEST1_INDEX_PERF   | 2018-05-26 10:28:54.079                |
+--------------------+----------------------------------------+
2 rows selected (0.089 seconds)

Once the rebuild completes in the background, the index table's state changes back to "active". All this while, the data table keeps serving read and write requests.

Scenario 3: The index went into disabled state, HBase became unresponsive, handlers are saturated (verified from Grafana), queries are dead slow and nothing is moving. Let's break this down into a sequence of the most probable events:
1. Multiple clients write to region server 1 (data table), using all of the default handlers.
2. There are now no handlers left on region server 1 to write the index update to region server 2, which hosts the index table regions.
3. Since the index update is not written on RS2, the client RPC on RS1 does not free up (and, if the situation continues, times out after hbase.rpc.timeout).
4. Because the index update failed, the index table goes into disabled state.
5. Rebuilder threads detect the disabled state of the index and start rebuilding the table, subsequently contending for the same default handler pool and aggravating the situation further.

This is a very common "deadlock" scenario, and users struggle to find what caused all these issues and where to start fixing them. In computer science this situation is also known as the "dining philosophers problem". The above sequence of events could cause some or all of the following issues:

1. Queries getting hung or timing out
2. Region servers becoming unresponsive
3. Clients unable to log in to the Phoenix shell
4. Long GC pauses (due to a large number of objects being created)

Point 4 above would eventually break the session with ZooKeeper and may bring the region server down.

What is the solution to this problem? Since client and server RPCs shared a common pool of default handlers, which is what caused these issues, it was decided to create a dedicated index handler pool and a custom RPC scheduler for it, and also to add a custom RPC controller to the chain of controllers. This controller filters outgoing index RPCs and tags them for higher priority. The following parameters were expected to be added for this (already part of HDP 2.6):

<property>
  <name>hbase.region.server.rpc.scheduler.factory.class</name>
  <value>org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory</value>
</property>
<property>
  <name>hbase.rpc.controllerfactory.class</name>
  <value>org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory</value>
</property>

However, another issue was introduced by these added parameters (PHOENIX-3360, PHOENIX-3994). Since clients shared the same hbase-site.xml containing these additional parameters, they started sending normal requests tagged with index priority. Similarly, index rebuild scans sent their RPCs tagged with index priority and using the index handler pool, which is not what it was designed for. This led many users to another "deadlock" situation where index writes would fail because most index handlers were busy doing rebuild scans or were being used by clients. The fix for PHOENIX-3994 (part of HDP 2.6.5) removes the dependency on these parameters for index priority, so they are needed neither on the server side nor on the client side. However, Ambari still adds these parameters and they can still create issues. A quick hack is to remove these two properties from all client-side hbase-site.xml files. For clients such as NiFi, which source hbase-site.xml from the phoenix-client jars, it is best to zip the updated hbase-site.xml into the jar itself (see the sketch below). If you have many or large index tables which require a substantial number of RPCs, you can also define "phoenix.rpc.index.handler.count" in custom hbase-site.xml and give it a value proportional to the total handler count you have defined. We will discuss a couple more scenarios in Part 3 of this article series.
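As a hedged, illustrative sketch of that jar update (paths and the jar name vary by installation; verify them against your own environment first):

# Hypothetical example: refresh the hbase-site.xml bundled inside the Phoenix client jar used by NiFi.
# 1. Start from the cluster hbase-site.xml and strip the two index-priority properties
#    (hbase.region.server.rpc.scheduler.factory.class and hbase.rpc.controllerfactory.class).
cp /etc/hbase/conf/hbase-site.xml /tmp/hbase-site.xml
#    ... edit /tmp/hbase-site.xml and remove the two <property> blocks ...
# 2. Replace the copy at the root of the client jar (jar path is illustrative).
jar -uf /usr/hdp/current/phoenix-client/phoenix-client.jar -C /tmp hbase-site.xml

Also See: Part 1, Part 4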
08-21-2018
10:35 PM
Phoenix shipped with HDP does not support import from Sqoop yet.
04-15-2018
12:02 PM
8 Kudos
In this article series, Part 1, Part 2, Part 3 and Part 4 covered various HBase tuning parameters, scenarios, the system side of things, and so on. In this last part, Part 5, I will discuss Phoenix performance parameters and general tips for tuning.
I will take the example of a query that was performing very slowly and show how we investigated the situation, starting by reading the query's explain plan. I cannot quote the exact query here (customer's data), but it was a select query with a where clause and, finally, order by conditions. The explain plan of the query is as follows:
+--------------------------------------------------------------------------------------------------------------------------+
| CLIENT 5853-CHUNK 234762256 ROWS 1671023360974 BYTES PARALLEL 5853-WAY FULL SCAN OVER MESSAGE_LOGS.TEST_MESSAGE           |
|     SERVER FILTER BY ((ROUTING IS NULL OR ROUTING = 'TEST') AND TESTNAME = 'createTest' AND TESTID = 'TEST' AND ERRORCODE |
|     SERVER TOP 20 ROWS SORTED BY [CREATEDTIMESTAMP DESC, TESTID]                                                          |
| CLIENT MERGE SORT                                                                                                          |
+--------------------------------------------------------------------------------------------------------------------------+
4 rows selected (0.958 seconds)
First, let's learn to dissect the explain plan of a Phoenix query. Following are my observations from this plan:
First statement
"CLIENT" means this part of the statement is executed on the client side.
"5853-CHUNK" means the query plan has logically divided the data into about 5853 chunks, and each chunk utilizes one thread. For future reference, keep in mind: one chunk == one thread of the client thread pool.
"234762256 ROWS" means this many rows will be processed by this query. Self-explanatory.
"1671023360974 BYTES" means about 1.6 TB of data will be processed.
"PARALLEL" means the query processing on these 5853 chunks (5853-WAY) is done in parallel.
"FULL SCAN OVER MESSAGE_LOGS.TEST_MESSAGE" means the entire table will be scanned, which is the most inefficient approach and an anti-pattern for HBase / Phoenix use cases. This table needs a secondary index to convert the full scan into a range scan.
Second Statement
“SERVER” , processing would happen in region servers
“FILTER BY” , returns only results that match the expression
Third Statement
“SERVER” , processing happening on server side, specifically “SORTED BY”
Fourth Statement
“CLIENT MERGE SORT”, meaning all the SORTED ROWS at server side would be brought back to client node and be merge sorted again.
What tuning was done to make the query run faster?
5853 chunks appeared to be too many for this query, especially with a client thread pool at its default size of 128; this slowed the whole query down, as only 128 threads could work at a time while the remaining tasks waited in the queue (phoenix.query.queueSize).
We considered bumping up the thread pool (phoenix.query.threadPoolSize) from the default of 128 to about 1000, but the customer did not have enough CPU cores on the client side and feared CPU contention there if we went beyond that, so we opted for another tuning.
We increased the guidepost width (phoenix.stats.guidepost.width), guideposts being the markers used to logically divide data into chunks, from its default of 100 MB to 500 MB. This effectively reduced the number of chunks and hence the number of threads.
Read more about all tuning parameters including above ones here.
To make this query more efficient, we recommended that the customer create a secondary index on top of this data table and include the most frequently used columns in it. Read more about secondary indexes here.
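For illustration only, a sketch of what such an index could look like, executed through sqlline. The index name, the choice of indexed and included columns, and the ZooKeeper quorum are assumptions, not the customer's actual schema:

# hypothetical covered index; adjust columns and connection string to your environment
# (the UPDATE STATISTICS refreshes guideposts after changing phoenix.stats.guidepost.width)
cat > /tmp/create_index.sql <<'EOF'
CREATE INDEX IDX_TEST_MESSAGE ON MESSAGE_LOGS.TEST_MESSAGE (TESTNAME, TESTID, CREATEDTIMESTAMP DESC) INCLUDE (ERRORCODE);
UPDATE STATISTICS MESSAGE_LOGS.TEST_MESSAGE;
EOF
/usr/hdp/current/phoenix-client/bin/sqlline.py zk1:2181:/hbase-unsecure /tmp/create_index.sql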
Thus, with all the changes in place, the query that earlier took about 5 minutes now took about 15 - 20 seconds.
Tuning recommendations in general :
For improving read performance, create a global secondary index; it has some write penalty, as the data of the columns chosen for the index is duplicated into another table.
For improving write performance, pre-split the table if you know the key ranges; also consider a local index, which is written into the same data table. Local indexes will be more stable with HDP 3.0, which carries a lot of bug fixes.
Choose the most frequently used columns for the primary key. Since all these columns are concatenated to form HBase's "row key", both their order of appearance in the row key and the row key's length matter. Order matters because a range scan becomes more efficient when the most frequently used column comes first in the row key; length matters because the row key is part of each cell and hence occupies memory and disk.
Use salt buckets if you have a monotonically increasing row key (a sketch follows this list). Read more about it here.
Please note that salting incurs a read penalty, as scans are repeated for each bucket.
Don't create too many salt buckets; the rule of thumb is to keep them equal to the number of region servers in your cluster.
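As an illustrative sketch only (the table name, columns and bucket count are invented; size SALT_BUCKETS to roughly the number of region servers you actually have):

# hypothetical salted Phoenix table created through sqlline
cat > /tmp/create_salted_table.sql <<'EOF'
CREATE TABLE IF NOT EXISTS EVENTS (
    EVENT_TIME TIMESTAMP NOT NULL,
    EVENT_ID   VARCHAR   NOT NULL,
    PAYLOAD    VARCHAR,
    CONSTRAINT PK PRIMARY KEY (EVENT_TIME, EVENT_ID)
) SALT_BUCKETS = 8;
EOF
/usr/hdp/current/phoenix-client/bin/sqlline.py zk1:2181:/hbase-unsecure /tmp/create_salted_table.sql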
Reference:
http://phoenix.apache.org/explainplan.html
https://phoenix.apache.org/tuning.html
https://phoenix.apache.org/tuning_guide.html
Also see : PART 1 , PART 2 , PART 3, PART 4
04-15-2018
11:36 AM
11 Kudos
In my articles Part 1 and Part 2, I explained various parameters which can be tuned to achieve optimized performance from HBase. In Part 3 we discussed some scenarios and aspects to focus on while investigating performance issues. Continuing this series, Part 4 covers some system and network level investigations.
DISKS
Along with investigating potential issues at the HBase and HDFS layers, we must not ignore the system side of things: OS, network and disks. We see several cases every day where severe issues at this layer are identified. A detailed investigation is beyond the scope of this article, but we should know where to point fingers. The trigger to look at the system side is messages such as the following in datanode logs at the time of the performance issue.
WARN datanode.DataNode (BlockReceiver.java:receivePacket(694)) - Slow BlockReceiver write data to disk cost:317ms (threshold=300ms)
Following are some of the tests we can run to ascertain disk performance:
- Run dd test to check read and write throughput and latencies.
For checking write throughput:
dd bs=1M count=10000 if=/dev/zero of=/data01/test.img conv=fdatasync
For checking read throughput:
dd if=/data01/test.img of=/dev/null bs=1M count=10000
Where /data01 is one of your datanode data disks.
- For checking latencies during either read or write, prepend the "time" command to the commands above; it will tell you how long the operation took and whether the delay came from the user side or the system side. Compare these results with the throughput agreed upon with your storage vendor / cloud service provider.
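For example, with the same illustrative path as above:

# write throughput, with a real/user/sys timing breakdown
time dd bs=1M count=10000 if=/dev/zero of=/data01/test.img conv=fdatasync
# read throughput, with a real/user/sys timing breakdown
time dd if=/data01/test.img of=/dev/null bs=1M count=10000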
- Another important tool is the Linux "iostat" command, which provides a great deal of advanced diagnostic information, such as how long an IO request sat in the IO scheduler queue or the disk controller queue, how many requests were waiting in the queues, how long the disk took to complete an IO operation, and so on.
- This command can tell you whether your workload is simply beyond your disks' capacity or whether your disks have issues at the hardware or driver / firmware level.
Another detailed article could be written to explain every field reported by this command, but that is beyond the scope of this article; a few fields do need highlighting:
A. await: The time taken through the scheduler, driver, controller, transport (for example a fibre SAN) and storage to complete each IO. It is the average time, in milliseconds, for I/O requests completed by storage, and includes the time spent by the requests in the scheduler queue plus the time spent by storage servicing them.
B. avgqu-sz: The average number of IOs queued within both the IO scheduler queue and the storage controller queue.
C. svctm: The actual service time the storage / disk took to serve the IO request, excluding all queue latencies.
D. %util: The percentage utilization of each disk.
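A typical way to watch these fields live is:

# -d device report, -m values in MB/s, -x extended statistics (await, avgqu-sz, svctm, %util), sampled every 5 seconds
iostat -dmx 5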
- Needless to say, you would always check commands like top / vmstat / mpstat for identifying issues related to CPU / memory / swapping etc.
- Last but most important, the command to see a live stream of what is happening at your IO layer is "iotop". This command gives you real-time detail of which command, process and user is actually clogging your disks.
Some general tuning tips :
- Selecting the right IO scheduler is critical for latency-sensitive workloads. The "deadline" scheduler has proven to be the best scheduler for such use cases. Check, and correct if needed, which scheduler your IO is being processed with:
[root@example.com hbase]# cat /sys/block/sd*/queue/scheduler
noop [deadline] cfq
- Choose the right mount options for your data disks. Options such as "noatime" save a great deal of IO overhead on data disks and in turn improve their performance.
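For instance, a hedged sketch of an /etc/fstab entry for a datanode disk (the device, mount point and filesystem are illustrative):

# mount the data disk with noatime so reads do not trigger inode access-time writes
/dev/sdb1   /data01   ext4   defaults,noatime   0 0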
- Check which mode your CPU cores are running in. We recommend them to run in performance mode. Virtual machines and cloud instances may not have this file.
echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
- Flushing a large pool of accumulated "dirty pages" to disk has been seen to cause significant IO overhead on systems. Please tune the following kernel parameters, which control this behavior. There is no single number suited to everyone, and trial and error is the best resort here; but on systems with a large amount of memory we can keep these ratios smaller than their default values, so that we do not end up accumulating a huge pool of dirty pages in memory that eventually gets burst-synced to disks of limited capacity and degrades application performance.
vm.dirty_background_ratio (default is 10)
vm.dirty_ratio (default is 40 )
Read more about this parameter here :
NETWORK
Network bandwidth across the nodes of an HDP cluster plays a critical role in heavy read / write use cases. It becomes even more critical in distributed computing because any single limping node can degrade the performance of the entire cluster. We are already in the world of gigabit networks and things are generally stable on this front; however, we continue to see issues here, and messages such as the following in datanode logs can be the trigger to investigate the network side of things:
WARN datanode.DataNode (BlockReceiver.java:receivePacket(571)) - Slow BlockReceiver write packet to mirror took 319ms (threshold=300ms)
Following are some of the tools / commands we can use to find out if something is wrong here:
- “iperf” to test network bandwidth between nodes. See more details about iperf here.
- Use Linux commands like ping / ifconfig and "netstat -s" to find out whether there are any significant packet drops / socket buffer overruns and whether this number is increasing over time.
- The "ethtool ethX" command shows the negotiated network bandwidth.
- "ethtool -S" collects NIC and driver statistics.
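For example (the interface name eth0 is illustrative):

# negotiated link speed and duplex
ethtool eth0 | grep -E 'Speed|Duplex'
# cumulative retransmission / socket-buffer pruning counters; re-run later and compare the deltas
netstat -s | grep -Ei 'retrans|pruned|collapsed'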
Some general tuning tips:
- Generally, any sort of NIC-level receive acceleration does not work well with our use cases and in turn proves to be a performance bottleneck in most scenarios. Disable any acceleration enabled on your NIC cards (of course, after consultation with your platform teams):
- Check if receive offloading is enabled:
$ grep 'receive-offload' sos_commands/networking/ethtool_-k_eth0 | grep ': on'
generic-receive-offload: on
large-receive-offload: on
- Disable them using following commands:
# ethtool -K eth0 gro off
# ethtool -K eth0 lro off
Reference
- Make sure MTU size is uniform across all nodes and switches in your network.
- Increase socket buffer sizes if you observe consistent overruns / prunes / collapses of packets as explained above. Consult your network and platform teams on how to tweak these values.
KERNEL / MEMORY
A tuned kernel is a mandatory requirement for the kind of workload you expect a node to process. However, it has been seen that kernel tuning is often ignored at the time such infrastructures are designed. Although this is a very vast topic and beyond the scope of this article, I will mention some important kernel parameters related to memory management which must be tuned on HDP cluster nodes. These configuration parameters live in /etc/sysctl.conf.
- vm.min_free_kbytes: The kernel tries to ensure that min_free_kbytes of memory is always available on the system, reclaiming memory to achieve this. Keeping this parameter at about 2 - 5 % of the total memory on the node makes sure that your applications do not suffer from prevailing memory fragmentation.
The first symptom of memory fragmentation is the appearance of messages such as "page allocation failure" in /var/log/messages or "dmesg" (the kernel ring buffer), or worse, the kernel starting to kill processes to free up memory via the "OOM killer".
- vm.swappiness: A parameter reflecting the tendency of a system to swap; the default value is 60. We do not want the system to swap at will, so keep this value at about 0 - 5 to keep the system's swap tendency minimal.
- It has been seen that transparent hugepages do not work well with the kind of workload we have. It is thus recommended to disable THP on cluster nodes.
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
- On modern NUMA (read here about NUMA ) systems , we strongly recommend to disable zone reclaim mode. This is based on understanding that performance penalties incurred due to reclaiming pages from within a zone are far worse than having the requested page served from another zone. Usually applications strongly dependent on using cache prefer having this parameter disabled.
vm.zone_reclaim_mode = 0
All these kernel level changes can be made on running systems by editing /etc/sysctl.conf and running sysctl -p command to bring them into effect.
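Putting the above together, a hedged sketch of the relevant /etc/sysctl.conf entries; the values are illustrative only and must be sized for your nodes (min_free_kbytes here assumes roughly a 256 GB node at about 2%):

# /etc/sysctl.conf - illustrative values, tune per node
vm.min_free_kbytes = 5242880        # keep ~5 GB free to fight memory fragmentation
vm.swappiness = 1                   # swap only as a last resort
vm.zone_reclaim_mode = 0            # serve pages from other NUMA zones instead of reclaiming locally
vm.dirty_background_ratio = 5       # start background writeback earlier than the default
vm.dirty_ratio = 10                 # cap dirty pages well below the default
# then apply without a reboot: sysctl -p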
In the last part of this article series (Part 5), I will discuss Phoenix performance tuning in detail.
Also see : PART 1 , PART 2 , PART 3
04-15-2018
10:15 AM
13 Kudos
Having understood the important tuning parameters of HBase in Part 1 and Part 2 of this article series, this article focuses on the various areas which should be investigated when handling any HBase performance issue.
Locality
By locality we mean that the physical HDFS blocks backing HBase HFiles need to be local to the region server node where the respective region is online. This locality is important because HBase prefers to use short-circuit reads directly from the local disks. If a region's HFiles are not local to it, reads incur extra latency because data has to be fetched from other nodes over the network.
One can monitor the locality of a table / region from the HBase Master UI, specifically on the table's page, by clicking on the listed table name. A value of "1" in the locality column means 100% block locality. Overall HBase locality is visible in the Ambari metrics section for HBase (as a percentage).
Here, major compaction tries to bring all HFiles related to a region back onto a single region server, thus restoring locality to a great extent.
Locality generally gets messed up either by the HDFS balancer, which tries to balance disk space across datanodes, or by the HBase balancer, which moves regions across region server nodes to balance the number of regions on each server.
The HBase balancer (the default is the Stochastic Load Balancer) can be tuned by tweaking the various costs associated with it (region load, table load, data locality, MemStore sizes, store file sizes) so that it runs according to our requirements. For example, to have the balancer weight locality cost more than anything else, we can add the following parameter to the HBase configs and give it a higher value (the default value is 25).
( an advanced parameter and an expert must be consulted before such an addition.)
hbase.master.balancer.stochastic.localityCost
To undo the locality harm done by the HDFS balancer we have no solution as of today, except running a major compaction immediately after the HDFS balancer is run. There are some unfinished JIRAs which, once implemented, would bring in features like block pinning and favored nodes; once they are available, HBase can configure its favored nodes, writes would be dedicated only to those nodes, and the HDFS balancer would not be able to touch the respective blocks. (Refer to HBASE-15531 to see all the unfinished work on this feature.)
Hotspotting
Hotspotting has been discussed quite a lot, but it is important to mention it here as it is a very crucial aspect to investigate during performance issues. Basically, hotspotting appears when all your write traffic hits only one particular region server. This usually happens because of a row key design that is sequential in nature, so that all writes land on the node which has this hot-spotted region online.
We can overcome this problem in three ways (at least, three that I know of):
Use random keys - Not an ideal solution, as it does not help range scans with start and stop keys.
Use salt buckets - If you have Phoenix tables on top of HBase tables, use this feature. Read more about this here.
Use pre-splitting - If you know the start and end keys of your sequential keys, you can pre-split the table by giving the split points beforehand at creation time (a sketch follows this list). This distributes empty regions across nodes, so that writes for a particular key land on the respective node, eventually distributing the write traffic across nodes. Read more about it here.
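As a hedged illustration (the table name, column family and split points are invented; choose split points that match your actual key distribution):

# pre-split at table creation so the empty regions are spread across region servers
echo "create 'events', 'cf', SPLITS => ['1000', '2000', '3000', '4000']" | hbase shell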
HDFS
The HDFS layer is very important: no matter how optimized your HBase is, if the datanodes are not responding as expected you will not get the expected performance. Investigate it any time you have latencies on HBase / Phoenix queries and you observe a large number of the following messages in region server logs:
2018-03-29 13:49:20,903 INFO [regionserver/example.com/10.9.2.35:16020] wal.FSHLog: Slow sync cost: 18546 ms, current pipeline: [xyz]
OR
Following messages in datanodes logs at the time of query run:
2018-04-12 08:22:51,910 WARN datanode.DataNode (BlockReceiver.java:receivePacket(571)) - Slow BlockReceiver write packet to mirror took 34548ms (threshold=300ms)
OR
2018-04-12 09:20:57,423 WARN datanode.DataNode (BlockReceiver.java:receivePacket(703)) - Slow BlockReceiver write data to disk cost:3440 ms (threshold=300ms)
If you see such messages in your logs, it is time to investigate things from the HDFS side, such as whether we have sufficient datanode transfer threads, heap and file descriptors, and to check the logs further for any GC or non-GC pauses. Once the HDFS side is confirmed, we must also look at the underlying infrastructure (network, disk, OS), because these messages mostly convey that HDFS is having a hard time receiving or transferring a block from/to another node, or syncing the data to disk. We will discuss the system side of things in Part 4 of this article series.
BlockCache Utilization and hitRatio
When investigating performance issues for read traffic, it is worth checking how helpful your block cache and bucket cache are and whether they are actually being utilized.
2017-06-12 19:01:48,453 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=33.67 MB, freeSize=31.33 GB, max=31.36 GB, blockCount=7, accesses=4626646, hits=4626646, hitRatio=100.00%, , cachingAccesses=4625995, cachingHits=4625995, cachingHitsRatio=100.00%, evictions=24749, evicted=662, evictedPerRun=0.026748554781079292
2017-06-12 19:02:07,429 INFO [BucketCacheStatsExecutor] bucket.BucketCache: failedBlockAdditions=0, totalSize=46.00 GB, freeSize=45.90 GB, usedSize=106.77 MB, cacheSize=93.21 MB, accesses=9018587, hits=4350242, IOhitsPerSecond=2, IOTimePerHit=0.03, hitRatio=48.24%, cachingAccesses=4354489, cachingHits=4350242, cachingHitsRatio=99.90%, evictions=0, evicted=234, evictedPerRun=Infinity
Flush Queue / Compaction queue
During crisis hours, when you are facing severe write latencies, it is very important to check what the memstore flush queue and the compaction queue look like. Let's discuss a couple of scenarios and some possible remedies here (expert consultation needed).
A. Flush queue not reducing: This leads us to three possibilities:
A.1 Flushes have been suspended for some reason. One such reason could be a condition called "too many store files", seen somewhere down in the region server logs (dictated by hbase.hstore.blockingStoreFiles). Check my Part 2 article to learn more about this parameter and how to tune it. Simply put, this parameter blocks flushing temporarily until minor compaction completes on the existing HFiles. Maybe increasing this number a few fold at the time of heavy write load will help.
Here , we can even help minor compaction by assigning it more threads so that it finishes compaction of these files faster:
hbase.regionserver.thread.compaction.small (default value is 1 , we can tune it to say 3 )
A.2 Another possibility is that the flusher operation itself is slow and cannot keep up with the write traffic, which slows down the flushes. We can help the flusher by allocating a few more handler threads using:
hbase.hstore.flusher.count (default value is 2, we can bump it to say 4 )
A.3 There is another possibility seen in such cases, that of a "flush storm". This behavior triggers when the number of write-ahead log files reaches its defined limit (hbase.regionserver.maxlogs) and the region server is forced to trigger flushes on all memstores until WAL files are archived and enough room is created to resume write operations. You would see messages like:
2017-09-23 17:43:49,356 INFO[regionserver//10.22.100.5:16020.logRoller] wal.FSHLog:Too many wals: logs=35, maxlogs=32; forcing flush of 20 regions(s): d4kjnfnkf34335666d03cb1f
Such behaviors could be controlled by bumping up:
hbase.regionserver.maxlogs (default value is 32, double or triple up this number if you know you have a heavy write load )
B. Compaction queue growing: compaction_queue=0:30 (meaning 0 major compactions and 30 minor compactions in the queue; a quick way to watch these queues from the shell is sketched after the parameter list below). Please note that compaction, whether minor or major, is additional IO overhead on the system, so whenever you try to fix a performance problem by making compactions faster or by accommodating more HFiles in one compaction thread, remember that this medicine has its own side effects. Nevertheless, we can make compaction more efficient by bumping up the following parameters:
Lower bound on the number of files in any minor compaction:
hbase.hstore.compactionThreshold (Default => 3)
Upper bound on the number of files in any minor compaction:
hbase.hstore.compaction.max (Default => 10)
The number of threads to handle a minor compaction:
hbase.regionserver.thread.compaction.small (Default => 1)
The number of threads to handle a major compaction:
hbase.regionserver.thread.compaction.large (Default => 1)
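As a hedged quick check, the flush and compaction queue lengths are also exposed by the region server metrics servlet (the hostname is illustrative and 16030 is the default region server info port; on a Kerberized UI you may need curl --negotiate -u :, and metric names can vary slightly between HBase versions):

# grep the flush / compaction queue lengths out of the region server JMX dump
curl -s http://rs-host.example.com:16030/jmx | grep -Ei 'flushQueueLength|compactionQueueLength'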
JVM metrics
Various performance issues drill down to JVM level issues, most specifically garbage collection STW (stop-the-world) pauses, which drag down application / region server performance or sometimes bring them to a halt / crash.
There are several situations in which you can have long GC pauses and I will not be able to cover them all here. But the least you can do is have your region server's gc.log file analyzed by one of the many online tools, such as this one; it will help you analyze what is going wrong in your JVM or, specifically, in its garbage collection behavior.
Also, if you have a huge number of regions on every region server (500+), consider moving to the G1 GC algorithm. Hortonworks does not officially support it yet, but we are in the process of doing so and many of our customers have implemented it successfully.
In your spare time, also go through this GC tuning presentation.
Without going into too many details, I would like to mention a few basic thumb rules (an illustrative hbase-env.sh sketch follows the GC log example below):
With the CMS GC algorithm, never set the region server heap greater than 36 - 38 GB; if you need more, switch to G1GC.
The young generation should never be more than 1/8th to 1/10th of the total region server heap.
Start your first tweak to reduce GC pauses by changing -XX:ParallelGCThreads, which is 8 by default; you can safely take it up to 16 (watch the number of CPU cores you have, though).
Check who contributed to GC pause “user” or “sys” or “real”
2017-10-11T14:06:17.492+0530: 646872.162: [GC [1 CMS-initial-mark: 17454832K(29458432K)] 20202988K(32871808K), 74.6880980 secs] [Times: user=37.61 sys=1.96, real=74.67 secs]
‘real’ time is the total elapsed time of the GC event. This is basically the time that you see in the clock.
‘user’ time is the CPU time spent in user-mode(outside the kernel).
‘Sys’ time is the amount of CPU time spent in the kernel. This means CPU time spent in system calls within the kernel.
Thus, in the above scenario, "sys" time is very small yet "real" time is very high, which indicates that GC did not get as many CPU cycles as it needed; in other words, the system was heavily clogged on the CPU side.
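Pulling the thumb rules above together, a hedged hbase-env.sh sketch; the heap size, young generation size and thread counts are illustrative only and must be sized for your own nodes and core counts:

# illustrative CMS settings for a region server with a 32 GB heap and 16+ cores
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xms32g -Xmx32g -Xmn3g \
  -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 \
  -XX:ParallelGCThreads=16 \
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc.log"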
There is another category of pause which we see regularly:
2017-10-20 14:03:42,327 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 9056ms
No GCs detected
This category of pause indicates that the JVM was frozen for about 9 seconds without any GC event. It mostly points to a problem on the physical machine side, possibly an issue with memory, CPU or some other OS-level condition that caused the whole JVM to freeze momentarily.
In part 4 of this series, I will cover some infrastructure level investigation of performance issues.
Also see : PART 1 , PART 2 , PART 4, PART 5.