<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Hbase hung command in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Hbase-hung-command/m-p/305864#M222591</link>
    <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/51637"&gt;@ebythomaspanick&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It appears you are hitting&amp;nbsp;&lt;SPAN&gt;HBASE-20616. If you have verified that no other Procedures are in RUNNABLE State (except for Truncate &amp;amp; Enable for the concerned Table),&amp;nbsp;sidelining the MasterProcWALs &amp;amp; clearing the Temp Directory "/apps/hbase/data/.tmp" would ensure the TruncateTableProcedure isn't retried. Stop the Masters (Active &amp;amp; Standby) during this step to avoid any issues.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Smarak&lt;/SPAN&gt;&lt;/P&gt;</description>
    <pubDate>Thu, 12 Nov 2020 18:44:45 GMT</pubDate>
    <dc:creator>smdas</dc:creator>
    <dc:date>2020-11-12T18:44:45Z</dc:date>
    <item>
      <title>Hbase hung command</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Hbase-hung-command/m-p/287116#M212866</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hi,&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;I have a cluster running HBase with 1 Master and 6 RegionServers. Recently I noticed that HBase commands are queuing up and possibly hung (RUNNABLE state) for one table.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;When checking the state of the table, I can see that the table is disabled. The enable command failed when executed from the HBase shell, stating that process id xxx (the previous truncate command) is already running. In the HBase UI, I can see that both the truncate and enable commands (shown under Procedures) are in RUNNABLE state.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;I tried the kill procedure command on the truncate command, but it returns false, indicating the process cannot be killed.&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;I have data on other tables, and scan commands on those tables work fine.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;What might be the issue here, and how can I kill the command running in HBase and get the table back to a working state?&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Any help is much appreciated.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Regards,&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;Thomas&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;On checking the Master log, I can see the following warnings:&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;I&gt;2020-01-08 11:03:28,168 WARN &amp;nbsp;[RegionOpenAndInitThread-buckets_2-10] ipc.Client: interrupted waiting to send rpc request to server&lt;BR /&gt;java.lang.InterruptedException&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:404)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.util.concurrent.FutureTask.get(FutureTask.java:191)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:1094)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; 
&amp;nbsp; &amp;nbsp; at org.apache.hadoop.ipc.Client.call(Client.java:1457)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.ipc.Client.call(Client.java:1398)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at com.sun.proxy.$Proxy16.getFileInfo(Unknown Source)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:818)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.lang.reflect.Method.invoke(Method.java:498)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:291)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:203)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:185)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at com.sun.proxy.$Proxy17.getFileInfo(Unknown Source)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.lang.reflect.Method.invoke(Method.java:498)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at 
org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:283)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.lang.reflect.Method.invoke(Method.java:498)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:283)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2165)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1442)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1438)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1438)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1447)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.createRegionOnFileSystem(HRegionFileSystem.java:898)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:6364)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at 
org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegion(ModifyRegionUtils.java:205)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.util.ModifyRegionUtils$1.call(ModifyRegionUtils.java:173)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.util.ModifyRegionUtils$1.call(ModifyRegionUtils.java:170)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.util.concurrent.FutureTask.run(FutureTask.java:266)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.util.concurrent.FutureTask.run(FutureTask.java:266)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.lang.Thread.run(Thread.java:748)&lt;BR /&gt;......&lt;BR /&gt;2020-01-08 11:03:28,438 DEBUG [WALProcedureStoreSyncThread] wal.WALProcedureStore: Roll new state log: 64132&lt;BR /&gt;2020-01-08 11:03:28,683 DEBUG [ProcedureExecutorThread-28] util.FSTableDescriptors: Current tableInfoPath = hdfs://my-hdfs/apps/hbase/data/.tmp/data/default/buckets_2/.tabledesc/.tableinfo.0000000001&lt;BR /&gt;2020-01-08 11:03:28,685 DEBUG [ProcedureExecutorThread-28] util.FSTableDescriptors: TableInfo already exists.. 
Skipping creation&lt;BR /&gt;2020-01-08 11:03:28,685 INFO &amp;nbsp;[RegionOpenAndInitThread-buckets_2-1] regionserver.HRegion: creating HRegion buckets_2 HTD == 'buckets_2', {NAME =&amp;gt; 'b', BLOOMFILTER =&amp;gt; 'ROW', VERSIONS =&amp;gt; '1', IN_MEMORY =&amp;gt; 'false', KEEP_DELETED_CELLS =&amp;gt; 'FALSE', DATA_BLOCK_ENCODING =&amp;gt; 'NONE', TTL =&amp;gt; 'FOREVER', COMPRESSION =&amp;gt; 'NONE', MIN_VERSIONS =&amp;gt; '0', BLOCKCACHE =&amp;gt; 'true', BLOCKSIZE =&amp;gt; '65536', REPLICATION_SCOPE =&amp;gt; '0'} RootDir = hdfs://my-hdfs/apps/hbase/data/.tmp Table name == buckets_2&lt;BR /&gt;2020-01-08 11:03:28,685 INFO &amp;nbsp;[RegionOpenAndInitThread-buckets_2-2] regionserver.HRegion: creating HRegion buckets_2 HTD == 'buckets_2', {NAME =&amp;gt; 'b', BLOOMFILTER =&amp;gt; 'ROW', VERSIONS =&amp;gt; '1', IN_MEMORY =&amp;gt; 'false', KEEP_DELETED_CELLS =&amp;gt; 'FALSE', DATA_BLOCK_ENCODING =&amp;gt; 'NONE', TTL =&amp;gt; 'FOREVER', COMPRESSION =&amp;gt; 'NONE', MIN_VERSIONS =&amp;gt; '0', BLOCKCACHE =&amp;gt; 'true', BLOCKSIZE =&amp;gt; '65536', REPLICATION_SCOPE =&amp;gt; '0'} RootDir = hdfs://my-hdfs/apps/hbase/data/.tmp Table name == buckets_2&lt;BR /&gt;2020-01-08 11:03:28,685 INFO &amp;nbsp;[RegionOpenAndInitThread-buckets_2-3] regionserver.HRegion: creating HRegion buckets_2 HTD == 'buckets_2', {NAME =&amp;gt; 'b', BLOOMFILTER =&amp;gt; 'ROW', VERSIONS =&amp;gt; '1', IN_MEMORY =&amp;gt; 'false', KEEP_DELETED_CELLS =&amp;gt; 'FALSE', DATA_BLOCK_ENCODING =&amp;gt; 'NONE', TTL =&amp;gt; 'FOREVER', COMPRESSION =&amp;gt; 'NONE', MIN_VERSIONS =&amp;gt; '0', BLOCKCACHE =&amp;gt; 'true', BLOCKSIZE =&amp;gt; '65536', REPLICATION_SCOPE =&amp;gt; '0'} RootDir = hdfs://my-hdfs/apps/hbase/data/.tmp Table name == buckets_2&lt;BR /&gt;2020-01-08 11:03:28,685 INFO &amp;nbsp;[RegionOpenAndInitThread-buckets_2-4] regionserver.HRegion: creating HRegion buckets_2 HTD == 'buckets_2', {NAME =&amp;gt; 'b', BLOOMFILTER =&amp;gt; 'ROW', VERSIONS =&amp;gt; '1', IN_MEMORY 
=&amp;gt; 'false', KEEP_DELETED_CELLS =&amp;gt; 'FALSE', DATA_BLOCK_ENCODING =&amp;gt; 'NONE', TTL =&amp;gt; 'FOREVER', COMPRESSION =&amp;gt; 'NONE', MIN_VERSIONS =&amp;gt; '0', BLOCKCACHE =&amp;gt; 'true', BLOCKSIZE =&amp;gt; '65536', REPLICATION_SCOPE =&amp;gt; '0'} RootDir = hdfs://my-hdfs/apps/hbase/data/.tmp Table name == buckets_2&lt;BR /&gt;2020-01-08 11:03:28,686 INFO &amp;nbsp;[RegionOpenAndInitThread-buckets_2-5] regionserver.HRegion: creating HRegion buckets_2 HTD == 'buckets_2', {NAME =&amp;gt; 'b', BLOOMFILTER =&amp;gt; 'ROW', VERSIONS =&amp;gt; '1', IN_MEMORY =&amp;gt; 'false', KEEP_DELETED_CELLS =&amp;gt; 'FALSE', DATA_BLOCK_ENCODING =&amp;gt; 'NONE', TTL =&amp;gt; 'FOREVER', COMPRESSION =&amp;gt; 'NONE', MIN_VERSIONS =&amp;gt; '0', BLOCKCACHE =&amp;gt; 'true', BLOCKSIZE =&amp;gt; '65536', REPLICATION_SCOPE =&amp;gt; '0'} RootDir = hdfs://my-hdfs/apps/hbase/data/.tmp Table name == buckets_2&lt;BR /&gt;....&lt;BR /&gt;2020-01-08 11:03:28,686 WARN &amp;nbsp;[RegionOpenAndInitThread-buckets_2-3] regionserver.HRegionFileSystem: Trying to create a region that already exists on disk: hdfs://my-hdfs/apps/hbase/data/.tmp/data/default/buckets_2/2e97482e25a07a7eb17a113535474057&lt;BR /&gt;2020-01-08 11:03:28,686 WARN &amp;nbsp;[RegionOpenAndInitThread-buckets_2-2] regionserver.HRegionFileSystem: Trying to create a region that already exists on disk: hdfs://my-hdfs/apps/hbase/data/.tmp/data/default/buckets_2/fbfd55d8d1852193a22875679852b1f2&lt;BR /&gt;2020-01-08 11:03:28,686 INFO &amp;nbsp;[RegionOpenAndInitThread-buckets_2-3] regionserver.HRegion: creating HRegion buckets_2 HTD == 'buckets_2', {NAME =&amp;gt; 'b', BLOOMFILTER =&amp;gt; 'ROW', VERSIONS =&amp;gt; '1', IN_MEMORY =&amp;gt; 'false', KEEP_DELETED_CELLS =&amp;gt; 'FALSE', DATA_BLOCK_ENCODING =&amp;gt; 'NONE', TTL =&amp;gt; 'FOREVER', COMPRESSION =&amp;gt; 'NONE', MIN_VERSIONS =&amp;gt; '0', BLOCKCACHE =&amp;gt; 'true', BLOCKSIZE =&amp;gt; '65536', REPLICATION_SCOPE =&amp;gt; '0'} RootDir = 
hdfs://my-hdfs/apps/hbase/data/.tmp Table name == buckets_2&lt;BR /&gt;2020-01-08 11:03:28,686 INFO &amp;nbsp;[RegionOpenAndInitThread-buckets_2-2] regionserver.HRegion: creating HRegion buckets_2 HTD == 'buckets_2', {NAME =&amp;gt; 'b', BLOOMFILTER =&amp;gt; 'ROW', VERSIONS =&amp;gt; '1', IN_MEMORY =&amp;gt; 'false', KEEP_DELETED_CELLS =&amp;gt; 'FALSE', DATA_BLOCK_ENCODING =&amp;gt; 'NONE', TTL =&amp;gt; 'FOREVER', COMPRESSION =&amp;gt; 'NONE', MIN_VERSIONS =&amp;gt; '0', BLOCKCACHE =&amp;gt; 'true', BLOCKSIZE =&amp;gt; '65536', REPLICATION_SCOPE =&amp;gt; '0'} RootDir = hdfs://my-hdfs/apps/hbase/data/.tmp Table name == buckets_2&lt;BR /&gt;2020-01-08 11:03:28,686 WARN &amp;nbsp;[ProcedureExecutorThread-28] procedure.TruncateTableProcedure: Retriable error trying to truncate table=buckets_2 state=TRUNCATE_TABLE_CREATE_FS_LAYOUT&lt;BR /&gt;java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: The specified region already exists on disk: hdfs://my-hdfs/apps/hbase/data/.tmp/data/default/buckets_2/2e97482e25a07a7eb17a113535474057&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:186)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:141)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:118)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure$3.createHdfsRegions(CreateTableProcedure.java:361)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.createFsLayout(CreateTableProcedure.java:380)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.createFsLayout(CreateTableProcedure.java:354)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:113)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.master.procedure.TruncateTableProcedure.executeFromState(TruncateTableProcedure.java:47)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:107)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:500)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1086)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:888)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:841)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$200(ProcedureExecutor.java:77)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$1.run(ProcedureExecutor.java:443)&lt;BR /&gt;Caused by: java.util.concurrent.ExecutionException: java.io.IOException: The specified region already exists on disk: hdfs://my-hdfs/apps/hbase/data/.tmp/data/default/buckets_2/2e97482e25a07a7eb17a113535474057&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.util.concurrent.FutureTask.report(FutureTask.java:122)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.util.concurrent.FutureTask.get(FutureTask.java:192)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at 
org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:180)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; ... 14 more&lt;BR /&gt;Caused by: java.io.IOException: The specified region already exists on disk: hdfs://my-hdfs/apps/hbase/data/.tmp/data/default/buckets_2/2e97482e25a07a7eb17a113535474057&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.createRegionOnFileSystem(HRegionFileSystem.java:900)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:6364)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegion(ModifyRegionUtils.java:205)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.util.ModifyRegionUtils$1.call(ModifyRegionUtils.java:173)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at org.apache.hadoop.hbase.util.ModifyRegionUtils$1.call(ModifyRegionUtils.java:170)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.util.concurrent.FutureTask.run(FutureTask.java:266)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.util.concurrent.FutureTask.run(FutureTask.java:266)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; at java.lang.Thread.run(Thread.java:748)&lt;BR /&gt;2020-01-08 11:03:28,686 WARN &amp;nbsp;[RegionOpenAndInitThread-buckets_2-5] regionserver.HRegionFileSystem: Trying to create a region that already exists on disk: 
hdfs://my-hdfs/apps/hbase/data/.tmp/data/default/buckets_2/6b0206739eeaeae1894fd54d36986c6e&lt;BR /&gt;2020-01-08 11:03:28,686 WARN &amp;nbsp;[RegionOpenAndInitThread-buckets_2-2] ipc.Client: interrupted waiting to send rpc request to server&lt;BR /&gt;java.lang.InterruptedException&lt;BR /&gt;......&lt;/I&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 08 Jan 2020 10:42:38 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Hbase-hung-command/m-p/287116#M212866</guid>
      <dc:creator>ebythomaspanick</dc:creator>
      <dc:date>2020-01-08T10:42:38Z</dc:date>
    </item>
    <item>
      <title>Re: Hbase hung command</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Hbase-hung-command/m-p/305864#M222591</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/51637"&gt;@ebythomaspanick&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It appears you are hitting&amp;nbsp;&lt;SPAN&gt;HBASE-20616. If you have verified that no other Procedures are in RUNNABLE State (except for Truncate &amp;amp; Enable for the concerned Table),&amp;nbsp;sidelining the MasterProcWALs &amp;amp; clearing the Temp Directory "/apps/hbase/data/.tmp" would ensure the TruncateTableProcedure isn't retried. Stop the Masters (Active &amp;amp; Standby) during this step to avoid any issues.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Smarak&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 12 Nov 2020 18:44:45 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Hbase-hung-command/m-p/305864#M222591</guid>
      <dc:creator>smdas</dc:creator>
      <dc:date>2020-11-12T18:44:45Z</dc:date>
    </item>
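    <!--
    Editor's note: the recovery smdas describes (verify no other RUNNABLE procedures, stop both Masters, sideline the MasterProcWALs, clear the .tmp directory, restart) could be sketched as the shell outline below. This is a hedged sketch, not a tested runbook: the HDFS paths come from the thread, but the Master stop/start steps depend on your cluster manager (Ambari, Cloudera Manager, or init scripts) and are left as echoed reminders. DRY_RUN defaults to 1 so no HDFS command is actually executed; review each step before running with DRY_RUN=0.

    ```shell
    #!/usr/bin/env bash
    # Hedged sketch of the MasterProcWAL sideline recovery for a stuck
    # TruncateTableProcedure (HBASE-20616). DRY_RUN=1 (default) only prints
    # the HDFS commands instead of executing them.
    set -u

    HBASE_ROOT="/apps/hbase/data"   # assumption: matches the .tmp path in the thread

    run() {
      # Print the command in dry-run mode; execute it otherwise.
      if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "WOULD RUN: $*"
      else
        "$@"
      fi
    }

    recover_truncate() {
      # Step 1: stop BOTH Masters (active and standby) first, via your
      # cluster manager, so nothing replays the procedure WALs mid-change.
      echo "step 1: stop the active and standby HBase Masters"

      # Step 2: sideline (rename) the MasterProcWALs so the stuck
      # TruncateTableProcedure is not replayed on Master restart.
      run hdfs dfs -mv "$HBASE_ROOT/MasterProcWALs" "$HBASE_ROOT/MasterProcWALs.sideline"

      # Step 3: clear the temp directory the truncate was writing its
      # region layout into (the "region already exists on disk" path).
      run hdfs dfs -rm -r -skipTrash "$HBASE_ROOT/.tmp"

      # Step 4: restart the Masters and re-check the Procedures page in the
      # HBase UI to confirm no stale procedures remain.
      echo "step 4: restart the Masters and verify the procedure list is clean"
    }

    recover_truncate
    ```
    
    
    -->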
  </channel>
</rss>

