<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Unable to execute hbase job due to  Unexpected closed connection exception in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104206#M29883</link>
    <description>&lt;P&gt;The above seems to be from the client side.&lt;/P&gt;&lt;P&gt;Is there any exception on the server side?&lt;/P&gt;&lt;P&gt;Overall, 31457280 might still be exceeded.&lt;/P&gt;&lt;P&gt;There could be other reasons.&lt;/P&gt;</description>
    <pubDate>Fri, 27 May 2016 00:17:35 GMT</pubDate>
    <dc:creator>tyu</dc:creator>
    <dc:date>2016-05-27T00:17:35Z</dc:date>
    <item>
      <title>Unable to execute hbase job due to  Unexpected closed connection exception</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104197#M29874</link>
      <description>&lt;P&gt;&lt;A href="https://community.cloudera.com/legacyfs/online/attachments/4577-hbase-error.txt"&gt;hbase-error.txt&lt;/A&gt; Unable to execute the HBase job due to the following exception:&lt;/P&gt;&lt;P&gt;2016-05-26 15:49:38,270 WARN  [main] ipc.RpcClient: Unexpected closed connection: Thread[IPC Client (1554225521) connection to fsdata1c.corp.arc.com/10.1.1.243:60020 from hdfs,5,]&lt;/P&gt;&lt;P&gt;Thu May 26 15:49:38 UTC 2016, org.apache.hadoop.hbase.client.RpcRetryingCaller@77e92d1b, java.io.IOException: Unexpected closed connection&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:811)
at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:394)
at com.bizosys.hsearch.hbase.HTableWrapper.get(HTableWrapper.java:100)
at com.arc.hbase.jobs.CacheBuildJob.saveContainer(CacheBuildJob.java:387)
at com.arc.hbase.jobs.CacheBuildJob.save(CacheBuildJob.java:323)
at com.arc.hbase.jobs.CacheBuildJob.exec(CacheBuildJob.java:172)
at com.arc.hbase.jobs.CacheBuildJob.run(CacheBuildJob.java:77)
at com.arc.hbase.jobs.CacheBuildJob.main(CacheBuildJob.java:513)&lt;/P&gt;&lt;P&gt;Attached is full error log.&lt;/P&gt;</description>
      <pubDate>Thu, 26 May 2016 23:14:04 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104197#M29874</guid>
      <dc:creator>raja_ray</dc:creator>
      <dc:date>2016-05-26T23:14:04Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to execute hbase job due to  Unexpected closed connection exception</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104198#M29875</link>
      <description>&lt;P&gt;Can you check the server log on fsdata1c.corp.arc.com to see if there is more of a clue?&lt;/P&gt;&lt;P&gt;Which release of HBase are you using?&lt;/P&gt;</description>
      <pubDate>Thu, 26 May 2016 23:16:29 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104198#M29875</guid>
      <dc:creator>tyu</dc:creator>
      <dc:date>2016-05-26T23:16:29Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to execute hbase job due to  Unexpected closed connection exception</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104199#M29876</link>
      <description>&lt;P&gt;Here is the region server log on &lt;A href="http://fsdata1c.corp.arc.com/"&gt;fsdata1c.corp.arc.com&lt;/A&gt;&lt;/P&gt;&lt;PRE&gt;2016-05-26 13:57:47,158 WARN  [RpcServer.handler=55,port=60020] ipc.RpcServer: RpcServer.respondercallId: 4324 service: ClientService methodName: Scan size: 30 connection: 10.1.1.243:52740: output error
2016-05-26 13:57:47,159 WARN  [RpcServer.handler=55,port=60020] ipc.RpcServer: RpcServer.handler=55,port=60020: caught a ClosedChannelException, this means that the server was processing a request but the client went away. The error message was: null
2016-05-26 13:58:47,135 INFO  [regionserver60020.leaseChecker] regionserver.HRegionServer: Scanner 3235538737043012213 lease expired on region CUTOFF4,O11\x09166343\x093\x09162830813,1464012806340.44e206d15b62ed4d452545242bd105cd.
2016-05-26 13:58:52,422 INFO  [RpcServer.reader=8,port=60020] ipc.RpcServer: RpcServer.listener,port=60020: count of bytes read: 0
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
	at sun.nio.ch.IOUtil.read(IOUtil.java:197)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
	at org.apache.hadoop.hbase.ipc.RpcServer.channelRead(RpcServer.java:2224)
	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1415)
	at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:790)
	at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:581)
	at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:556)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:744)
2016-05-26 13:58:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12076, accesses=3683357, hits=3525415, hitRatio=95.71%, , cachingAccesses=3547447, cachingHits=3448937, cachingHitsRatio=97.22%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 13:59:52,420 INFO  [regionserver60020.leaseChecker] regionserver.HRegionServer: Scanner 594209239597513333 lease expired on region CUTOFF4,,1464012806340.48ec64624ad37ae9272c5c28ec177894.
2016-05-26 14:03:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3720308, hits=3562365, hitRatio=95.75%, , cachingAccesses=3584398, cachingHits=3485887, cachingHitsRatio=97.25%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:08:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3749373, hits=3591430, hitRatio=95.79%, , cachingAccesses=3613463, cachingHits=3514952, cachingHitsRatio=97.27%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:13:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3769032, hits=3611089, hitRatio=95.81%, , cachingAccesses=3633122, cachingHits=3534611, cachingHitsRatio=97.29%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:18:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3769844, hits=3611901, hitRatio=95.81%, , cachingAccesses=3633934, cachingHits=3535423, cachingHitsRatio=97.29%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:23:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3831804, hits=3673861, hitRatio=95.88%, , cachingAccesses=3695894, cachingHits=3597383, cachingHitsRatio=97.33%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:28:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3832074, hits=3674131, hitRatio=95.88%, , cachingAccesses=3696164, cachingHits=3597653, cachingHitsRatio=97.33%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:33:55,249 DEBUG [LruStats #0] hfile.LruBlockCache: Total=1.18 GB, free=414.12 MB, max=1.58 GB, blocks=12077, accesses=3844554, hits=3686611, hitRatio=95.89%, , cachingAccesses=3708644, cachingHits=3610133, cachingHitsRatio=97.34%, evictions=0, evicted=79712, evictedPerRun=Infinity
2016-05-26 14:38:11,712 INFO  [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region CUTOFF4,,1464012806340.48ec64624ad37ae9272c5c28ec177894. after a delay of 15478
2016-05-26 14:38:21,712 INFO  [regionserver60020.periodicFlusher] regionserver.HRegionServer: regionserver60020.periodicFlusher requesting flush for region CUTOFF4,,1464012806340.48ec64624ad37ae9272c5c28ec177894. after a delay of 19133
2016-05-26 14:38:27,190 DEBUG [Thread-20] regionserver.HRegion: Started memstore flush for CUTOFF4,,1464012806340.48ec64624ad37ae9272c5c28ec177894., current region memstore size 93.2 M
2016-05-26 14:38:27,401 INFO  [Thread-20] regionserver.DefaultStoreFlusher: Flushed, sequenceid=96047, memsize=31.7 M, hasBloomFilter=true, into tmp file hdfs://fsmaster1c.corp.arc.com:8020/apps/hbase/data/data/default/CUTOFF4/48ec64624ad37ae9272c5c28ec177894/.tmp/6d1e9fe6186a448f9a322c73ecf4ad0a
2016-05-26 14:38:27,411 DEBUG [Thread-20] regionserver.HRegionFileSystem: Committing store file hdfs://fsmaster1c.corp.arc.com:8020/apps/hbase/data/data/default/CUTOFF4/48ec64624ad37ae9272c5c28ec177894/.tmp/6d1e9fe6186a448f9a322c73ecf4ad0a as hdfs://fsmaster1c.corp.arc.com:8020/apps/hbase/data/data/default/CUTOFF4/48ec64624ad37ae9272c5c28ec177894/1/6d1e9fe6186a448f9a322c73ecf4ad0a
&lt;/PRE&gt;</description>
      <pubDate>Thu, 26 May 2016 23:30:56 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104199#M29876</guid>
      <dc:creator>raja_ray</dc:creator>
      <dc:date>2016-05-26T23:30:56Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to execute hbase job due to  Unexpected closed connection exception</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104200#M29877</link>
      <description>&lt;P&gt;It seems possible that your application might be prematurely closing the HBase `Connection` while your tasks are still using HTable instances that use that Connection. Have you investigated this at all? Any chance you can share more about your application?&lt;/P&gt;</description>
      <pubDate>Thu, 26 May 2016 23:31:22 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104200#M29877</guid>
      <dc:creator>elserj</dc:creator>
      <dc:date>2016-05-26T23:31:22Z</dc:date>
    </item>
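The premature-close scenario described in the reply above can be sketched as follows. This is a minimal illustration, not code from the thread: it assumes hbase-client is on the classpath, and the table name and row key are placeholders.

```java
// Minimal sketch: keep the shared Connection open for as long as any Table
// handle derived from it is in use. Closing the Connection while a Table is
// still issuing RPCs is one way to get "Unexpected closed connection".
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Table;

public class ConnectionLifecycle {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        // try-with-resources closes the Table first, then the Connection,
        // so no Table ever outlives the Connection it came from.
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("CUTOFF4"))) {
            table.get(new Get("some-row-key".getBytes()));
        }
    }
}
```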
    <item>
      <title>Re: Unable to execute hbase job due to  Unexpected closed connection exception</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104201#M29878</link>
      <description>&lt;P&gt;Below is code-&lt;/P&gt;&lt;PRE&gt;package com.arc.hbase.jobs;


import java.io.IOException;
import java.util.ArrayList;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;


import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;


import com.arc.datasink.HBaseTables;
import com.arc.management.MonitorCacheBuildJob;
import com.arc.management.MonitorCollector;
import com.arc.management.MonitorMeasure;
import com.arc.rest.common.BytesMergerContainer;
import com.arc.rest.common.BytesMergerObject;
import com.arc.rest.common.ChangeDecorator;
import com.arc.rest.common.ChangeSetDecorator;
import com.arc.rest.common.ObjectKey;
import com.arc.rest.common.PostChanges_3_0.Changes;
import com.arc.util.ArcConfig;
import com.arc.util.FileSyncLog;
import com.arc.util.LineReaderUtil;
import com.bizosys.hsearch.hbase.HBaseFacade;
import com.bizosys.hsearch.hbase.HTableWrapper;


public class CacheBuildJob 
{
	private static final boolean INFO_ENABLED = FileSyncLog.l.isInfoEnabled();
	private static final char SEPARATOR_OBJID = ',';
	private static CacheBuildJob instance = null;
	
	private static final boolean MONITOR_JOB = ArcConfig.MONITOR_CACHE_BUILD_JOB;
	
	public static CacheBuildJob getInstance() 
	{
		if ( null != instance) return instance;
		synchronized (CacheBuildJob.class) 
		{
			if ( null != instance ) return instance;
			instance = new CacheBuildJob();
		}
		return instance;
	}	
	
	long lastUpdatedTime= new Date().getTime() - ArcConfig.CACHE_BUILD_START_INTERVAL;
	int lastUpdatedDt = new Date(lastUpdatedTime).getDate();
	
	private CacheBuildJob()
	{
	}
	
	boolean isRunning  = false;
	public void run()
	{
		if ( isRunning  ) 
		{
			if ( INFO_ENABLED ) FileSyncLog.l.info(new Date().toLocaleString() + " Cache Build Job SKIPPED");
			return;
		}
		
		isRunning  = true;
		long start = System.currentTimeMillis();
		
		try 
		{
			exec();
			long end = System.currentTimeMillis();
			if ( INFO_ENABLED ) FileSyncLog.l.info(new Date().toLocaleString() + " Cache Build Job SUCCESS, Run time in ms: " + (end - start));
		} 
		catch (Exception ex) 
		{
			long end = System.currentTimeMillis();
			ex.printStackTrace();
			FileSyncLog.l.fatal(new Date().toLocaleString() + " Cache Build Job FAILED, Run time in ms: " + (end - start));
		} 
		finally 
		{
			isRunning = false;
		}
	}
	
	public void exec() throws Exception
	{
		long currentTime = System.currentTimeMillis();
		//Shift the clock back by the configured run interval
		currentTime = currentTime - ArcConfig.CACHEBUILD_JOB_RUN_INTERVAL;
		
		if ( lastUpdatedTime &amp;gt;= currentTime) {
			FileSyncLog.l.warn("CacheBuildJob : lastUpdatedTime &amp;gt;= currentTime :  " + lastUpdatedTime + "&amp;gt;=" + currentTime);
			return;
		}
		
		Date now = new Date(currentTime);
		long startTime = currentTime;
		int currentUpdatedDt = now.getDate();


		Map&amp;lt;String, String&amp;gt; uniqueCotainerKeyWithObjectKeys = new HashMap&amp;lt;String, String&amp;gt;(1024);
		List&amp;lt;ChangeDecorator.Deser&amp;gt; timeseriesChanges = new ArrayList&amp;lt;ChangeDecorator.Deser&amp;gt;(1024);
		
		if ( INFO_ENABLED) FileSyncLog.l.info("AddToTimeseriesChanges: Start");
		if ( MONITOR_JOB ) MonitorCacheBuildJob.getInstance().onEnter();
		
		//Step 1 - Pick up leftover rows from the previous timeseries table
		if ( lastUpdatedDt != currentUpdatedDt) 
		{
//			String tsTable = HBaseTables.getLastTimeSeriesTable(currentUpdatedDt);
			String tsTable = HBaseTables.getLastTimeSeriesTable(currentTime);
			addToTimeseriesChanges(tsTable, startTime, uniqueCotainerKeyWithObjectKeys, timeseriesChanges);
		}
		
		if ( INFO_ENABLED) FileSyncLog.l.info("AddToTimeseriesChanges, Changes: " + timeseriesChanges.size() + " Projects:" + uniqueCotainerKeyWithObjectKeys.size());
		
		//Step 2 - Get rows from the current timeseries table
//		String tsTable = HBaseTables.getTimeSeriesTable(currentUpdatedDt);
		String tsTable = HBaseTables.getTimeSeriesTable(currentTime);
		addToTimeseriesChanges(tsTable, startTime, uniqueCotainerKeyWithObjectKeys, timeseriesChanges);
		if ( INFO_ENABLED)
			FileSyncLog.l.info("AddToTimeseriesChanges, Changes: " + timeseriesChanges.size() + "  Projects:" + uniqueCotainerKeyWithObjectKeys.size());
		
		//Step 3 - Merge with the cutoff tables.
		String currentCutoffTableName = HBaseTables.getCutoffTable(currentUpdatedDt);
		String lastCutoffTableName = HBaseTables.getLastCutoffTable(currentUpdatedDt);
		
		HBaseFacade facade = null;
		HTableWrapper currentCutoffTable = null;
		HTableWrapper lastCutoffTable = null;
		
		long cutoffTime = startTime - HBaseTables.CUTOFF_DURATION_SECS * 1000;
		
		/**
		 * We now have all the ChangeDecorators; next, merge them into the
		 * cutoff tables.
		 */
		
		try {
			facade = HBaseFacade.getInstance();
			if ( INFO_ENABLED) {
				FileSyncLog.l.info("Current Cutoff Table: " + currentCutoffTableName + 
					" (Size)" + timeseriesChanges.size() + " (Cutoff Limit)" + new Date(cutoffTime));
			}
			currentCutoffTable = facade.getTable(currentCutoffTableName);
			lastCutoffTable = facade.getTable(lastCutoffTableName);
//			System.out.println("TimeSeriesTable - "+tsTable+"\tCurrent Cutoff Table - "+ currentCutoffTableName+"\tLast Cutoff Table - "+ lastCutoffTableName + " on " + now.toString() + " time in millis "+ now.getTime() + " currentTime " + currentTime);
			int batchSize = ArcConfig.CACHE_BUILD_BATCH_SIZE;
			Map&amp;lt;String, ChangeSetDecorator.Deser&amp;gt; objKeyWithChangeSets = 
				new HashMap&amp;lt;String, ChangeSetDecorator.Deser&amp;gt;(1024);
			List&amp;lt;Put&amp;gt; putL = new ArrayList&amp;lt;Put&amp;gt;(batchSize);
			


			for (ChangeDecorator.Deser deserCh : timeseriesChanges) 
			{
				deserCh.touch(System.currentTimeMillis());
				String objectKey = deserCh.objectKey;
				
				/**
				 * Batch Flush on 4096 objects = 4MB
				 */
//				System.out.println("CacheBuild Time -: "+(System.currentTimeMillis()-deserCh.getTime()));
				if ( objKeyWithChangeSets.size() &amp;gt;= batchSize ) 
				{
					if ( INFO_ENABLED)  FileSyncLog.l.info("Saving: Enter");
					save(currentCutoffTable, lastCutoffTable, objKeyWithChangeSets, uniqueCotainerKeyWithObjectKeys, cutoffTime, putL);
					if ( INFO_ENABLED)  FileSyncLog.l.info("Saving: Exit");
				}
				
				
				/**
				 * Step 1: In-memory lookup.
				 * If changes for this object key are already present, they have
				 * already been loaded and read; just merge into them.
				 */
				if (objKeyWithChangeSets.containsKey(objectKey)) {
					
					if ( INFO_ENABLED)  FileSyncLog.l.info("Memory Table Lookup: " + objectKey);
					ChangeSetDecorator.Deser mergedVal = 
						createObjChangeSets(objKeyWithChangeSets.get(objectKey), deserCh, cutoffTime);
					
					mergedVal.key = objectKey;
					mergedVal.itemId = deserCh.getChanges().getItemId();
					
					objKeyWithChangeSets.put(objectKey, mergedVal);
					continue;
				}
				
				Get getter = new Get(objectKey.getBytes());
				
				/**
				 * Step 2: Look in the current cutoff table.
				 */
				Result resultC = currentCutoffTable.get(getter);
				
				{
					if ( null != resultC) {


						byte[] val = resultC.getValue(HBaseTables.FAMILY_NAME, HBaseTables.COL_NAME);
						int valSize = ( null == val) ? 0 : val.length;
						if ( valSize == 0 ) val = null;
						if ( null != val ) {
							
							if ( INFO_ENABLED)  FileSyncLog.l.info("Current cutoff table lookup: " + objectKey);
							ChangeSetDecorator.Deser cs = new ChangeSetDecorator.Deser(val);
							cs.key = objectKey;
							cs.itemId = deserCh.getChanges().getItemId();


							ChangeSetDecorator.Deser mergedVal = createObjChangeSets(cs, deserCh, cutoffTime);
							objKeyWithChangeSets.put(objectKey, mergedVal);
							continue;
						}
					}
				}


				/**
				 * Step 3: Fall back to the last cutoff table, since the key
				 * does not exist in the current cutoff table.
				 */


				Result resultO = lastCutoffTable.get(getter);
				
				if ( null != resultO) {
					byte[] val = resultO.getValue(HBaseTables.FAMILY_NAME, HBaseTables.COL_NAME);
					int valSize = ( null == val) ? 0 : val.length;
					if ( valSize == 0 ) val = null;


					if ( null != val ) {


						if ( INFO_ENABLED)  FileSyncLog.l.info("Previous cutoff table Lookup: " + objectKey);
						ChangeSetDecorator.Deser cs = new ChangeSetDecorator.Deser(val);
						cs.key = objectKey;
						cs.itemId = deserCh.getChanges().getItemId();


						ChangeSetDecorator.Deser mergedVal = createObjChangeSets(cs, deserCh, cutoffTime);
						objKeyWithChangeSets.put(objectKey, mergedVal);
						continue;
					}
				}
				
				/**
				 * Not found in the current or last cutoff table, so this is a
				 * fresh change: bootstrap it.
				 */


				if ( INFO_ENABLED)  FileSyncLog.l.info("Bootstrapping: " + objectKey);
				ChangeSetDecorator.Deser none = new ChangeSetDecorator.Deser(null);
				none.key = objectKey;
				none.itemId = deserCh.getChanges().getItemId();
				ChangeSetDecorator.Deser mergedVal = createObjChangeSets(none, deserCh, -1);
				objKeyWithChangeSets.put(objectKey, mergedVal);
			}
			
			if ( objKeyWithChangeSets.size() &amp;gt; 0 ) { //flush the final partial batch
				save(currentCutoffTable, lastCutoffTable, 
					objKeyWithChangeSets, uniqueCotainerKeyWithObjectKeys, cutoffTime, putL);
			}
			
			/**
			 * Step 4: All succeeded; move to the next timestamp.
			 */
			lastUpdatedTime = startTime; //maximum timestamp value, exclusive
			lastUpdatedDt = currentUpdatedDt;
			
		} finally {
			if ( null != facade &amp;amp;&amp;amp; null != currentCutoffTable) facade.putTable(currentCutoffTable);
			if ( null != facade &amp;amp;&amp;amp; null != lastCutoffTable) facade.putTable(lastCutoffTable);
		}
		
		long endTime = System.currentTimeMillis();


		if ( MONITOR_JOB ) 
		{
			long timeTaken = (endTime - startTime);
			MonitorCollector collector = new MonitorCollector();
			collector.add(new MonitorMeasure("CacheBuildJob", timeTaken));
			MonitorCacheBuildJob.getInstance().onExit(collector);
		}
	}


	/**
	 * 
	 * @param currentCutoffTable
	 * @param lastCutoffTable
	 * @param objKeyWithChangeSets
	 * @param uniqueProjectIdWithObjIds
	 * @param cutoffTime
	 * @param putL
	 * @throws IOException
	 */
	private void save(HTableWrapper currentCutoffTable, HTableWrapper lastCutoffTable,
			Map&amp;lt;String, ChangeSetDecorator.Deser&amp;gt; objKeyWithChangeSets, 
			Map&amp;lt;String, String&amp;gt; uniqueProjectIdWithObjIds,  
			long cutoffTime, List&amp;lt;Put&amp;gt; putL)
			throws IOException {
		
		putL.clear();
		for (String key : objKeyWithChangeSets.keySet()) {
			ChangeSetDecorator.Deser val = objKeyWithChangeSets.get(key);
			
			Put update = new Put(key.getBytes());
			update.add(HBaseTables.FAMILY_NAME,HBaseTables.COL_NAME, val.data);
			update.setDurability(Durability.SYNC_WAL);
			putL.add(update);
		}
		currentCutoffTable.put(putL);
		if ( INFO_ENABLED) FileSyncLog.l.info("Cutoff Table Objects Added - " + putL.size());
		putL.clear();




		saveContainer(currentCutoffTable, lastCutoffTable,
			objKeyWithChangeSets, uniqueProjectIdWithObjIds, cutoffTime);


		currentCutoffTable.flushCommits();
	
		objKeyWithChangeSets.clear();
		
	}


	/**
	 * 
	 * @param currentCutoffTable
	 * @param lastCutoffTable
	 * @param objKeyWithChangeSets
	 * @param uniqueProjectIdWithObjIds
	 * @param cutoffTime
	 * @throws IOException
	 */
	private void saveContainer(HTableWrapper currentCutoffTable,
		HTableWrapper lastCutoffTable,
		Map&amp;lt;String, ChangeSetDecorator.Deser&amp;gt; objKeyWithChangeSets,
		Map&amp;lt;String, String&amp;gt; uniqueProjectIdWithObjIds, long cutoffTime)
		throws IOException {


		/**
		 * mergeContainerChanges for the current projects
		 */
		List&amp;lt;String&amp;gt; objKeyL = new ArrayList&amp;lt;String&amp;gt;();
		Set&amp;lt;ChangeSetDecorator.Deser&amp;gt; containerObjects = new HashSet&amp;lt;ChangeSetDecorator.Deser&amp;gt;();


		for (String projectId : uniqueProjectIdWithObjIds.keySet()) {


			objKeyL.clear();
			containerObjects.clear();


			/**
			 * Find out all object Ids belonging to this project and in current set
			 */
			String objectKeys = uniqueProjectIdWithObjIds.get(projectId);
			LineReaderUtil.fastSplit(objKeyL, objectKeys, SEPARATOR_OBJID);
			for (String objKey : objKeyL) {
				
				ChangeSetDecorator.Deser val = objKeyWithChangeSets.get(objKey);
				if ( null != val) containerObjects.add( val);
			}
			
			if ( INFO_ENABLED) FileSyncLog.l.info( "projectId:" + projectId + " ,Objects =" + containerObjects.size());
			byte[] projectIdB = projectId.getBytes();
			Get containerId = new Get(projectIdB);
			
			/**
			 * Look up the changes in the current cutoff table.
			 */
			byte[] containerCell = null;
			Result res = currentCutoffTable.get(containerId);
			if ( null != res) {
				containerCell = res.getValue(HBaseTables.FAMILY_NAME,HBaseTables.COL_NAME);
			}
			
			/**
			 * The project changes were not found in the current cutoff table;
			 * fall back to the last cutoff table.
			 */
			int containerCellSize = ( null == containerCell) ? 0 : containerCell.length;
			if ( containerCellSize == 0 ) {
				res = lastCutoffTable.get(containerId);
				if ( null != res) {
					containerCell = res.getValue(HBaseTables.FAMILY_NAME,HBaseTables.COL_NAME);
				}
			}
			containerCellSize = ( null == containerCell) ? 0 : containerCell.length;
			if ( containerCellSize == 0 ) containerCell = null;
			
			/**
			 * Merge the data
			 */
			if ( INFO_ENABLED ) FileSyncLog.l.info("containerCell:" + 
				( (null == containerCell) ? 0 : containerCell.length) ) ;
			
			byte[] containerCellUpdated = BytesMergerContainer.mergeContainerChangesD(
				containerCell, containerObjects, cutoffTime, -1L);


			if ( INFO_ENABLED ) FileSyncLog.l.info("containerCellUpdated:" + 
					( (null == containerCellUpdated) ? 0 : containerCellUpdated.length) ) ;
			
			/**
			 * Save to current cutoff table
			 */
			Put containerUpdate = new Put(projectIdB);
			containerUpdate.add(HBaseTables.FAMILY_NAME,HBaseTables.COL_NAME, containerCellUpdated);
			containerUpdate.setDurability(Durability.SYNC_WAL);
			currentCutoffTable.put(containerUpdate);
		}
		if ( INFO_ENABLED) FileSyncLog.l.info("Cutoff Table Containers Added - " + uniqueProjectIdWithObjIds.size());
	}
	
	/**
	 * 
	 * @param existingCutoffBytes
	 * @param currentChanges
	 * @param cutoffTime
	 * @return
	 * @throws IOException
	 */
	public final ChangeSetDecorator.Deser createObjChangeSets(final ChangeSetDecorator.Deser existingCutoffBytes, 
		final ChangeDecorator.Deser currentChanges, final long cutoffTime) throws IOException {
		
		byte[] data = BytesMergerObject.mergeObjectChanges(
			existingCutoffBytes.data, currentChanges, cutoffTime, -1);
		existingCutoffBytes.data = data;
		return existingCutoffBytes;
	}
	
	/**
	 * 
	 * @param tsTableCurrent
	 * @param startTime
	 * @param uniqueProjectIds
	 * @param uniqueObjIds
	 */
	public void addToTimeseriesChanges(String tsTableCurrent, long startTime, 
			Map&amp;lt;String, String&amp;gt; uniqueContainerKeyWithObjectKeys, List&amp;lt;ChangeDecorator.Deser&amp;gt; timeseriesChanges) {
		
		HBaseFacade facade = null;
		HTableWrapper table = null;
		ResultScanner scanner = null;
		
		try {
					
			facade = HBaseFacade.getInstance();
			table = facade.getTable(tsTableCurrent);
			Scan scan = new Scan();
			scan.setCaching(1024);
			scan.setMaxVersions(1);
			scan.setTimeRange(lastUpdatedTime, startTime);
			scan = scan.addColumn(HBaseTables.FAMILY_NAME, HBaseTables.COL_NAME);
			
			scanner = table.getScanner(scan);
			
			StringBuilder keyBuilder = new StringBuilder();
			int counter = 0;


			for (Result r: scanner) {
				if ( null == r) continue;
				if ( r.isEmpty()) continue;


				counter++;
				if ( counter % 1000 == 0 ) FileSyncLog.l.info(tsTableCurrent + " read : " + counter);
				
				byte[] changeB = r.getValue(HBaseTables.FAMILY_NAME, HBaseTables.COL_NAME);
				int changeBSize = ( null == changeB) ? 0 : changeB.length;
				if ( changeBSize == 0 ) continue;


				if ( INFO_ENABLED) FileSyncLog.l.info("Inside AddToTimeSeries: changeB length: " + changeB.length); //byte[].toString() would only print the identity hash
				ChangeDecorator.Deser currentChangeDeser = new ChangeDecorator.Deser(changeB);
				Changes currentChange = currentChangeDeser.getChanges();
				
				//Add to Unique Projects
				String containerKey = ObjectKey.getContainerKey(keyBuilder,currentChange);
				String objectKey = ObjectKey.getObjectKey(keyBuilder,currentChange);


				currentChangeDeser.objectKey = objectKey;
				
				if (uniqueContainerKeyWithObjectKeys.containsKey(containerKey)) {
					uniqueContainerKeyWithObjectKeys.put(containerKey, 
						uniqueContainerKeyWithObjectKeys.get(containerKey) + SEPARATOR_OBJID + objectKey);
				} else {
					uniqueContainerKeyWithObjectKeys.put(containerKey, objectKey);					
				}
				
				//Merge actions of an object.
				timeseriesChanges.add(currentChangeDeser);
			}
		
		} catch (Exception e) {
		
			FileSyncLog.l.fatal("Unable to execute daily update job. " ,e);
		
		} finally {
			if (null != scanner) {
				try {scanner.close();} catch (Exception ex) {ex.printStackTrace();}
			}
			
			if (null != table) {
				try {facade.putTable(table);} catch (Exception ex) {ex.printStackTrace();}
			}
		}
	}
	
	public static void main(String[] args) throws Exception {
		CacheBuildJob.getInstance().run();		
	}
}
&lt;/PRE&gt;</description>
      <pubDate>Thu, 26 May 2016 23:42:56 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104201#M29878</guid>
      <dc:creator>raja_ray</dc:creator>
      <dc:date>2016-05-26T23:42:56Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to execute hbase job due to  Unexpected closed connection exception</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104202#M29879</link>
      <description>&lt;P&gt;Getting below exception-&lt;/P&gt;&lt;P&gt;2016-05-26 16:46:54,288 INFO  [main] util.FileSyncLog: containerCell:10412488
2016-05-26 16:46:54,298 INFO  [main] util.FileSyncLog: containerCellUpdated:10538784
java.lang.IllegalArgumentException: KeyValue size too large
at org.apache.hadoop.hbase.client.HTable.validatePut(HTable.java:1353)
at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:989)
at org.apache.hadoop.hbase.client.HTable.put(HTable.java:953)
at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.put(HTablePool.java:432)
at com.bizosys.hsearch.hbase.HTableWrapper.put(HTableWrapper.java:117)
at com.arc.hbase.jobs.CacheBuildJob.saveContainer(CacheBuildJob.java:410)
at com.arc.hbase.jobs.CacheBuildJob.save(CacheBuildJob.java:320)
at com.arc.hbase.jobs.CacheBuildJob.exec(CacheBuildJob.java:171)
at com.arc.hbase.jobs.CacheBuildJob.run(CacheBuildJob.java:75)
at com.arc.hbase.jobs.CacheBuildJob.main(CacheBuildJob.java:509)&lt;/P&gt;</description>
      <pubDate>Thu, 26 May 2016 23:47:56 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104202#M29879</guid>
      <dc:creator>raja_ray</dc:creator>
      <dc:date>2016-05-26T23:47:56Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to execute hbase job due to  Unexpected closed connection exception</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104203#M29880</link>
      <description>&lt;P&gt;bq. IllegalArgumentException: KeyValue size too large&lt;/P&gt;&lt;P&gt;That is where the problem is.&lt;/P&gt;&lt;P&gt;Please check the size of the KeyValue on the client side.&lt;/P&gt;</description>
      <pubDate>Thu, 26 May 2016 23:49:11 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104203#M29880</guid>
      <dc:creator>tyu</dc:creator>
      <dc:date>2016-05-26T23:49:11Z</dc:date>
    </item>
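The suggestion above, checking the KeyValue size on the client side, can be done with a plain length check before building the Put. A minimal sketch follows; the class and method names are hypothetical, and the limit used is the `hbase.client.keyvalue.maxsize` default quoted in the next reply.

```java
// Hypothetical pre-check mirroring the validatePut() size test, which rejects
// any single cell value larger than hbase.client.keyvalue.maxsize (10485760
// bytes by default). Checking the byte[] length up front on the client side
// avoids the IllegalArgumentException seen in the log above.
public class CellSizeCheck {
    // Default value of hbase.client.keyvalue.maxsize: 10 MB.
    static final int DEFAULT_MAX = 10 * 1024 * 1024; // 10485760

    static boolean fitsLimit(byte[] value, int maxSize) {
        return value != null && value.length <= maxSize;
    }

    public static void main(String[] args) {
        byte[] small = new byte[1024];
        // 10538784 is the containerCellUpdated size from the log above.
        byte[] big = new byte[10538784];
        System.out.println(fitsLimit(small, DEFAULT_MAX)); // true
        System.out.println(fitsLimit(big, DEFAULT_MAX));   // false
    }
}
```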
    <item>
      <title>Re: Unable to execute hbase job due to  Unexpected closed connection exception</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104204#M29881</link>
      <description>&lt;P&gt;Here is the default value for the related config:&lt;/P&gt;&lt;P&gt;&amp;lt;name&amp;gt;hbase.client.keyvalue.maxsize&amp;lt;/name&amp;gt;
  &amp;lt;value&amp;gt;10485760&amp;lt;/value&amp;gt;&lt;/P&gt;</description>
      <pubDate>Thu, 26 May 2016 23:49:58 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104204#M29881</guid>
      <dc:creator>tyu</dc:creator>
      <dc:date>2016-05-26T23:49:58Z</dc:date>
    </item>
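The default shown above lives in hbase-site.xml and can be raised on the client. A sketch of the override, using the 31457280-byte (30 MB) value the poster later tried:

```xml
<!-- client-side hbase-site.xml: raise the maximum KeyValue size -->
<property>
  <name>hbase.client.keyvalue.maxsize</name>
  <value>31457280</value> <!-- 30 MB; default is 10485760 (10 MB) -->
</property>
```

Raising the limit only moves the threshold; very large cells still stress the region servers, which is consistent with the connection errors reported later in this thread.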
    <item>
      <title>Re: Unable to execute hbase job due to  Unexpected closed connection exception</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104205#M29882</link>
      <description>&lt;P&gt;I increased the value from 10485760 (10 MB) to 31457280 (30 MB). Now I am getting the following exception:&lt;/P&gt;&lt;P&gt;2016-05-26 15:39:53,949 WARN  [main] ipc.RpcClient: Unexpected closed connection: Thread[IPC Client (1554225521) connection to fsdata1c.corp.arc.com/10.1.1.243:60020 from hdfs,5,]
2016-05-26 15:39:55,266 WARN  [main] ipc.RpcClient: Unexpected closed connection: Thread[IPC Client (1554225521) connection to fsdata1c.corp.arc.com/10.1.1.243:60020 from hdfs,5,]
2016-05-26 15:39:56,773 WARN  [main] ipc.RpcClient: Unexpected closed connection: Thread[IPC Client (1554225521) connection to fsdata1c.corp.arc.com/10.1.1.243:60020 from hdfs,5,]
2016-05-26 15:39:58,785 WARN  [main] ipc.RpcClient: Unexpected closed connection: Thread[IPC Client (1554225521) connection to fsdata1c.corp.arc.com/10.1.1.243:60020 from hdfs,5,]
2016-05-26 15:40:01,809 WARN  [main] ipc.RpcClient: Unexpected closed connection: Thread[IPC Client (1554225521) connection to fsdata1c.corp.arc.com/10.1.1.243:60020 from hdfs,5,]
&lt;/P&gt;</description>
      <pubDate>Fri, 27 May 2016 00:14:50 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104205#M29882</guid>
      <dc:creator>raja_ray</dc:creator>
      <dc:date>2016-05-27T00:14:50Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to execute hbase job due to  Unexpected closed connection exception</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104206#M29883</link>
      <description>&lt;P&gt;The above seems to be from the client side.&lt;/P&gt;&lt;P&gt;Is there any exception on the server side?&lt;/P&gt;&lt;P&gt;Overall, 31457280 might still be exceeded.&lt;/P&gt;&lt;P&gt;There could be other reasons.&lt;/P&gt;</description>
      <pubDate>Fri, 27 May 2016 00:17:35 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104206#M29883</guid>
      <dc:creator>tyu</dc:creator>
      <dc:date>2016-05-27T00:17:35Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to execute hbase job due to  Unexpected closed connection exception</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104207#M29884</link>
      <description>&lt;P&gt;Attached are the HBase master error details: &lt;A href="https://community.cloudera.com/legacyfs/online/attachments/4578-hbase-master.txt"&gt;hbase-master.txt&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 27 May 2016 00:25:47 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104207#M29884</guid>
      <dc:creator>raja_ray</dc:creator>
      <dc:date>2016-05-27T00:25:47Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to execute hbase job due to  Unexpected closed connection exception</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104208#M29885</link>
      <description>&lt;P&gt;The master log starts at '2016-05-26 17:10:57,018'.&lt;/P&gt;&lt;P&gt;Please find the region server(s) that served the write requests and see if there is a clue in the server log.&lt;/P&gt;</description>
      <pubDate>Fri, 27 May 2016 00:31:20 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104208#M29885</guid>
      <dc:creator>tyu</dc:creator>
      <dc:date>2016-05-27T00:31:20Z</dc:date>
    </item>
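That triage step can be sketched as a small log filter. This is a minimal sketch assuming plain-text logs; the log path and the time window (15:30-15:49 on 2016-05-26, matching the client-side warnings above) are illustrative, not from the thread.

```python
import re

# Filter a region server log for warnings/errors inside the failure
# window. Window and path are illustrative assumptions.
WINDOW = re.compile(r"2016-05-26 15:[34]\d")
LEVELS = re.compile(r"ERROR|WARN|Exception")

def filter_rs_errors(lines):
    """Return log lines that are errors/warnings inside the window."""
    return [ln for ln in lines if LEVELS.search(ln) and WINDOW.search(ln)]

# Usage (path is an assumption; adjust per cluster):
# with open("/var/log/hbase/hbase-regionserver-fsdata1c.log") as f:
#     for ln in filter_rs_errors(f):
#         print(ln.rstrip())
```

Running this against each region server that hosted the written regions narrows down whether the disconnects left any server-side trace.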
    <item>
      <title>Re: Unable to execute hbase job due to  Unexpected closed connection exception</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104209#M29886</link>
      <description>&lt;P&gt;Attached is the HBase region server log: &lt;A href="https://community.cloudera.com/legacyfs/online/attachments/4580-hbase-regionserver.txt"&gt;hbase-regionserver.txt&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 27 May 2016 01:05:55 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104209#M29886</guid>
      <dc:creator>raja_ray</dc:creator>
      <dc:date>2016-05-27T01:05:55Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to execute hbase job due to  Unexpected closed connection exception</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104210#M29887</link>
      <description>&lt;P&gt;I looked at the two region server logs, which start at:&lt;/P&gt;&lt;P&gt;2016-05-26 18:01:52,787&lt;/P&gt;&lt;P&gt;That window doesn't overlap with the latest client-side error.&lt;/P&gt;&lt;P&gt;Was there any error after 18:01?&lt;/P&gt;&lt;P&gt;BTW, I didn't see any error or exception in the last two attachments.&lt;/P&gt;</description>
      <pubDate>Fri, 27 May 2016 01:14:54 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104210#M29887</guid>
      <dc:creator>tyu</dc:creator>
      <dc:date>2016-05-27T01:14:54Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to execute hbase job due to  Unexpected closed connection exception</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104211#M29888</link>
      <description>&lt;P&gt;No, there is no error after 18:01.&lt;/P&gt;</description>
      <pubDate>Fri, 27 May 2016 01:16:42 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104211#M29888</guid>
      <dc:creator>raja_ray</dc:creator>
      <dc:date>2016-05-27T01:16:42Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to execute hbase job due to  Unexpected closed connection exception</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104212#M29889</link>
      <description>&lt;P&gt;Below is server log (hbase master log)-&lt;/P&gt;&lt;PRE&gt;2016-05-26 17:10:57,018 DEBUG [AM.ZK.Worker-pool2-t33] master.AssignmentManager: Handling M_ZK_REGION_OFFLINE, server=fsdata1c.corp.arc.com,60020,1464282355492, region=af3ed22bb54a2052eaca660332714462, current_state={af3ed22bb54a2052eaca660332714462 state=PENDING_OPEN, ts=1464282656664, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,018 DEBUG [AM.ZK.Worker-pool2-t21] master.AssignmentManager: Handling M_ZK_REGION_OFFLINE, server=fsdata1c.corp.arc.com,60020,1464282355492, region=ff0af29a7f8e111fc4d46c7a30a17459, current_state={ff0af29a7f8e111fc4d46c7a30a17459 state=PENDING_OPEN, ts=1464282656716, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,018 DEBUG [AM.ZK.Worker-pool2-t24] master.AssignmentManager: Handling M_ZK_REGION_OFFLINE, server=fsdata1c.corp.arc.com,60020,1464282355492, region=87df75ff7669a91c17cd903d5a9f3a3e, current_state={87df75ff7669a91c17cd903d5a9f3a3e state=PENDING_OPEN, ts=1464282656611, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,018 DEBUG [AM.ZK.Worker-pool2-t40] master.AssignmentManager: Handling M_ZK_REGION_OFFLINE, server=fsdata1c.corp.arc.com,60020,1464282355492, region=047c1d0ed29a28ddb22b3cbbcb787675, current_state={047c1d0ed29a28ddb22b3cbbcb787675 state=PENDING_OPEN, ts=1464282656711, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,029 DEBUG [AM.ZK.Worker-pool2-t22] master.AssignmentManager: Handling M_ZK_REGION_OFFLINE, server=fsdata1c.corp.arc.com,60020,1464282355492, region=f07d37734f7d70dd47f1545345b772e9, current_state={f07d37734f7d70dd47f1545345b772e9 state=PENDING_OPEN, ts=1464282656745, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,049 DEBUG [AM.ZK.Worker-pool2-t25] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=fc6e430b85da20d6ede2cd47a8288519, current_state={fc6e430b85da20d6ede2cd47a8288519 state=PENDING_OPEN, ts=1464282656677, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,049 INFO  [AM.ZK.Worker-pool2-t25] master.RegionStates: Transitioned {fc6e430b85da20d6ede2cd47a8288519 state=PENDING_OPEN, ts=1464282656677, server=fsdata1c.corp.arc.com,60020,1464282355492} to {fc6e430b85da20d6ede2cd47a8288519 state=OPENING, ts=1464282657049, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,050 DEBUG [AM.ZK.Worker-pool2-t28] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=047c1d0ed29a28ddb22b3cbbcb787675, current_state={047c1d0ed29a28ddb22b3cbbcb787675 state=PENDING_OPEN, ts=1464282656711, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,050 INFO  [AM.ZK.Worker-pool2-t28] master.RegionStates: Transitioned {047c1d0ed29a28ddb22b3cbbcb787675 state=PENDING_OPEN, ts=1464282656711, server=fsdata1c.corp.arc.com,60020,1464282355492} to {047c1d0ed29a28ddb22b3cbbcb787675 state=OPENING, ts=1464282657050, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,050 DEBUG [AM.ZK.Worker-pool2-t29] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=87df75ff7669a91c17cd903d5a9f3a3e, current_state={87df75ff7669a91c17cd903d5a9f3a3e state=PENDING_OPEN, ts=1464282656611, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,050 INFO  [AM.ZK.Worker-pool2-t29] master.RegionStates: Transitioned {87df75ff7669a91c17cd903d5a9f3a3e state=PENDING_OPEN, ts=1464282656611, server=fsdata1c.corp.arc.com,60020,1464282355492} to {87df75ff7669a91c17cd903d5a9f3a3e state=OPENING, ts=1464282657050, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,162 DEBUG [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Handling RS_ZK_REGION_CLOSED, server=fsdata2c.corp.arc.com,60020,1464282352148, region=44e206d15b62ed4d452545242bd105cd, current_state={44e206d15b62ed4d452545242bd105cd state=PENDING_CLOSE, ts=1464282656595, server=fsdata2c.corp.arc.com,60020,1464282352148}
2016-05-26 17:10:57,162 DEBUG [AM.ZK.Worker-pool2-t30] handler.ClosedRegionHandler: Handling CLOSED event for 44e206d15b62ed4d452545242bd105cd
2016-05-26 17:10:57,162 INFO  [AM.ZK.Worker-pool2-t30] master.RegionStates: Transitioned {44e206d15b62ed4d452545242bd105cd state=PENDING_CLOSE, ts=1464282656595, server=fsdata2c.corp.arc.com,60020,1464282352148} to {44e206d15b62ed4d452545242bd105cd state=CLOSED, ts=1464282657162, server=fsdata2c.corp.arc.com,60020,1464282352148}
2016-05-26 17:10:57,163 DEBUG [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Found an existing plan for CUTOFF4,O11\x09166343\x093\x09162830813,1464012806340.44e206d15b62ed4d452545242bd105cd. destination server is fsdata1c.corp.arc.com,60020,1464282355492 accepted as a dest server = true
2016-05-26 17:10:57,163 DEBUG [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Using pre-existing plan for CUTOFF4,O11\x09166343\x093\x09162830813,1464012806340.44e206d15b62ed4d452545242bd105cd.; plan=hri=CUTOFF4,O11\x09166343\x093\x09162830813,1464012806340.44e206d15b62ed4d452545242bd105cd., src=fsdata2c.corp.arc.com,60020,1464282352148, dest=fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,163 INFO  [AM.ZK.Worker-pool2-t30] master.RegionStates: Transitioned {44e206d15b62ed4d452545242bd105cd state=CLOSED, ts=1464282657162, server=fsdata2c.corp.arc.com,60020,1464282352148} to {44e206d15b62ed4d452545242bd105cd state=OFFLINE, ts=1464282657163, server=fsdata2c.corp.arc.com,60020,1464282352148}
2016-05-26 17:10:57,163 DEBUG [AM.ZK.Worker-pool2-t30] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Creating (or updating) unassigned node 44e206d15b62ed4d452545242bd105cd with OFFLINE state
2016-05-26 17:10:57,167 INFO  [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Assigning CUTOFF4,O11\x09166343\x093\x09162830813,1464012806340.44e206d15b62ed4d452545242bd105cd. to fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,167 INFO  [AM.ZK.Worker-pool2-t30] master.RegionStates: Transitioned {44e206d15b62ed4d452545242bd105cd state=OFFLINE, ts=1464282657163, server=fsdata2c.corp.arc.com,60020,1464282352148} to {44e206d15b62ed4d452545242bd105cd state=PENDING_OPEN, ts=1464282657167, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,176 DEBUG [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Handling M_ZK_REGION_OFFLINE, server=fsdata1c.corp.arc.com,60020,1464282355492, region=44e206d15b62ed4d452545242bd105cd, current_state={44e206d15b62ed4d452545242bd105cd state=PENDING_OPEN, ts=1464282657167, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,403 DEBUG [AM.ZK.Worker-pool2-t31] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=047c1d0ed29a28ddb22b3cbbcb787675, current_state={047c1d0ed29a28ddb22b3cbbcb787675 state=OPENING, ts=1464282657050, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,403 INFO  [AM.ZK.Worker-pool2-t31] master.RegionStates: Transitioned {047c1d0ed29a28ddb22b3cbbcb787675 state=OPENING, ts=1464282657050, server=fsdata1c.corp.arc.com,60020,1464282355492} to {047c1d0ed29a28ddb22b3cbbcb787675 state=OPEN, ts=1464282657403, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,403 DEBUG [AM.ZK.Worker-pool2-t31] handler.OpenedRegionHandler: Handling OPENED of 047c1d0ed29a28ddb22b3cbbcb787675 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,403 DEBUG [AM.ZK.Worker-pool2-t27] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=87df75ff7669a91c17cd903d5a9f3a3e, current_state={87df75ff7669a91c17cd903d5a9f3a3e state=OPENING, ts=1464282657050, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,403 INFO  [AM.ZK.Worker-pool2-t27] master.RegionStates: Transitioned {87df75ff7669a91c17cd903d5a9f3a3e state=OPENING, ts=1464282657050, server=fsdata1c.corp.arc.com,60020,1464282355492} to {87df75ff7669a91c17cd903d5a9f3a3e state=OPEN, ts=1464282657403, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,403 DEBUG [AM.ZK.Worker-pool2-t27] handler.OpenedRegionHandler: Handling OPENED of 87df75ff7669a91c17cd903d5a9f3a3e from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,404 DEBUG [AM.ZK.Worker-pool2-t34] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=fc6e430b85da20d6ede2cd47a8288519, current_state={fc6e430b85da20d6ede2cd47a8288519 state=OPENING, ts=1464282657049, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,404 INFO  [AM.ZK.Worker-pool2-t34] master.RegionStates: Transitioned {fc6e430b85da20d6ede2cd47a8288519 state=OPENING, ts=1464282657049, server=fsdata1c.corp.arc.com,60020,1464282355492} to {fc6e430b85da20d6ede2cd47a8288519 state=OPEN, ts=1464282657404, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,404 DEBUG [AM.ZK.Worker-pool2-t34] handler.OpenedRegionHandler: Handling OPENED of fc6e430b85da20d6ede2cd47a8288519 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,408 DEBUG [AM.ZK.Worker-pool2-t31] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node 047c1d0ed29a28ddb22b3cbbcb787675 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,409 DEBUG [AM.ZK.Worker-pool2-t31] master.AssignmentManager: Znode CUTOFF3,C31\x09166,1463559795389.047c1d0ed29a28ddb22b3cbbcb787675. deleted, state: {047c1d0ed29a28ddb22b3cbbcb787675 state=OPEN, ts=1464282657403, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,410 DEBUG [AM.ZK.Worker-pool2-t34] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node fc6e430b85da20d6ede2cd47a8288519 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,411 INFO  [AM.ZK.Worker-pool2-t31] master.RegionStates: Onlined 047c1d0ed29a28ddb22b3cbbcb787675 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,411 INFO  [AM.ZK.Worker-pool2-t31] master.RegionStates: Offlined 047c1d0ed29a28ddb22b3cbbcb787675 from fsdata3c.corp.arc.com,60020,1464282353206
2016-05-26 17:10:57,409 DEBUG [AM.ZK.Worker-pool2-t27] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node 87df75ff7669a91c17cd903d5a9f3a3e in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,411 DEBUG [AM.ZK.Worker-pool2-t27] master.AssignmentManager: Znode MON,O11\x09154548\x093\x09154524183,1456831930257.87df75ff7669a91c17cd903d5a9f3a3e. deleted, state: {87df75ff7669a91c17cd903d5a9f3a3e state=OPEN, ts=1464282657403, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,411 INFO  [AM.ZK.Worker-pool2-t27] master.RegionStates: Onlined 87df75ff7669a91c17cd903d5a9f3a3e on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,411 INFO  [AM.ZK.Worker-pool2-t27] master.RegionStates: Offlined 87df75ff7669a91c17cd903d5a9f3a3e from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,411 DEBUG [AM.ZK.Worker-pool2-t34] master.AssignmentManager: Znode MON,O11\x09154769\x093\x09151995813,1443695471549.fc6e430b85da20d6ede2cd47a8288519. deleted, state: {fc6e430b85da20d6ede2cd47a8288519 state=OPEN, ts=1464282657404, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,411 INFO  [AM.ZK.Worker-pool2-t34] master.RegionStates: Onlined fc6e430b85da20d6ede2cd47a8288519 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,411 INFO  [AM.ZK.Worker-pool2-t34] master.RegionStates: Offlined fc6e430b85da20d6ede2cd47a8288519 from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,415 DEBUG [AM.ZK.Worker-pool2-t23] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=ff0af29a7f8e111fc4d46c7a30a17459, current_state={ff0af29a7f8e111fc4d46c7a30a17459 state=PENDING_OPEN, ts=1464282656716, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,415 INFO  [AM.ZK.Worker-pool2-t23] master.RegionStates: Transitioned {ff0af29a7f8e111fc4d46c7a30a17459 state=PENDING_OPEN, ts=1464282656716, server=fsdata1c.corp.arc.com,60020,1464282355492} to {ff0af29a7f8e111fc4d46c7a30a17459 state=OPENING, ts=1464282657415, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,415 DEBUG [AM.ZK.Worker-pool2-t36] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=016ed857886cbcc5eb088b069484218e, current_state={016ed857886cbcc5eb088b069484218e state=PENDING_OPEN, ts=1464282656633, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,415 INFO  [AM.ZK.Worker-pool2-t36] master.RegionStates: Transitioned {016ed857886cbcc5eb088b069484218e state=PENDING_OPEN, ts=1464282656633, server=fsdata1c.corp.arc.com,60020,1464282355492} to {016ed857886cbcc5eb088b069484218e state=OPENING, ts=1464282657415, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,416 DEBUG [AM.ZK.Worker-pool2-t33] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=af3ed22bb54a2052eaca660332714462, current_state={af3ed22bb54a2052eaca660332714462 state=PENDING_OPEN, ts=1464282656664, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,416 INFO  [AM.ZK.Worker-pool2-t33] master.RegionStates: Transitioned {af3ed22bb54a2052eaca660332714462 state=PENDING_OPEN, ts=1464282656664, server=fsdata1c.corp.arc.com,60020,1464282355492} to {af3ed22bb54a2052eaca660332714462 state=OPENING, ts=1464282657416, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,508 DEBUG [AM.ZK.Worker-pool2-t21] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=016ed857886cbcc5eb088b069484218e, current_state={016ed857886cbcc5eb088b069484218e state=OPENING, ts=1464282657415, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,508 INFO  [AM.ZK.Worker-pool2-t21] master.RegionStates: Transitioned {016ed857886cbcc5eb088b069484218e state=OPENING, ts=1464282657415, server=fsdata1c.corp.arc.com,60020,1464282355492} to {016ed857886cbcc5eb088b069484218e state=OPEN, ts=1464282657508, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,508 DEBUG [AM.ZK.Worker-pool2-t21] handler.OpenedRegionHandler: Handling OPENED of 016ed857886cbcc5eb088b069484218e from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,512 DEBUG [AM.ZK.Worker-pool2-t21] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node 016ed857886cbcc5eb088b069484218e in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,512 DEBUG [AM.ZK.Worker-pool2-t21] master.AssignmentManager: Znode TSE,,1464258676776.016ed857886cbcc5eb088b069484218e. deleted, state: {016ed857886cbcc5eb088b069484218e state=OPEN, ts=1464282657508, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,512 INFO  [AM.ZK.Worker-pool2-t21] master.RegionStates: Onlined 016ed857886cbcc5eb088b069484218e on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,512 INFO  [AM.ZK.Worker-pool2-t21] master.RegionStates: Offlined 016ed857886cbcc5eb088b069484218e from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,518 DEBUG [AM.ZK.Worker-pool2-t37] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=f5836191f2d1a9806269864db4287786, current_state={f5836191f2d1a9806269864db4287786 state=PENDING_OPEN, ts=1464282656656, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,518 INFO  [AM.ZK.Worker-pool2-t37] master.RegionStates: Transitioned {f5836191f2d1a9806269864db4287786 state=PENDING_OPEN, ts=1464282656656, server=fsdata1c.corp.arc.com,60020,1464282355492} to {f5836191f2d1a9806269864db4287786 state=OPENING, ts=1464282657518, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,518 DEBUG [AM.ZK.Worker-pool2-t22] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=af3ed22bb54a2052eaca660332714462, current_state={af3ed22bb54a2052eaca660332714462 state=OPENING, ts=1464282657416, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,518 INFO  [AM.ZK.Worker-pool2-t22] master.RegionStates: Transitioned {af3ed22bb54a2052eaca660332714462 state=OPENING, ts=1464282657416, server=fsdata1c.corp.arc.com,60020,1464282355492} to {af3ed22bb54a2052eaca660332714462 state=OPEN, ts=1464282657518, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,518 DEBUG [AM.ZK.Worker-pool2-t22] handler.OpenedRegionHandler: Handling OPENED of af3ed22bb54a2052eaca660332714462 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,522 DEBUG [AM.ZK.Worker-pool2-t22] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node af3ed22bb54a2052eaca660332714462 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,522 DEBUG [AM.ZK.Worker-pool2-t22] master.AssignmentManager: Znode CUTOFF2,C31\x09164,1462813183940.af3ed22bb54a2052eaca660332714462. deleted, state: {af3ed22bb54a2052eaca660332714462 state=OPEN, ts=1464282657518, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,522 INFO  [AM.ZK.Worker-pool2-t22] master.RegionStates: Onlined af3ed22bb54a2052eaca660332714462 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,522 INFO  [AM.ZK.Worker-pool2-t22] master.RegionStates: Offlined af3ed22bb54a2052eaca660332714462 from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,526 DEBUG [AM.ZK.Worker-pool2-t29] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=a438c3fccb4dffce6c3f2fb2a217ff18, current_state={a438c3fccb4dffce6c3f2fb2a217ff18 state=PENDING_OPEN, ts=1464282656691, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,526 INFO  [AM.ZK.Worker-pool2-t29] master.RegionStates: Transitioned {a438c3fccb4dffce6c3f2fb2a217ff18 state=PENDING_OPEN, ts=1464282656691, server=fsdata1c.corp.arc.com,60020,1464282355492} to {a438c3fccb4dffce6c3f2fb2a217ff18 state=OPENING, ts=1464282657526, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,535 DEBUG [AM.ZK.Worker-pool2-t39] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=ff0af29a7f8e111fc4d46c7a30a17459, current_state={ff0af29a7f8e111fc4d46c7a30a17459 state=OPENING, ts=1464282657415, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,535 INFO  [AM.ZK.Worker-pool2-t39] master.RegionStates: Transitioned {ff0af29a7f8e111fc4d46c7a30a17459 state=OPENING, ts=1464282657415, server=fsdata1c.corp.arc.com,60020,1464282355492} to {ff0af29a7f8e111fc4d46c7a30a17459 state=OPEN, ts=1464282657535, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,536 DEBUG [AM.ZK.Worker-pool2-t39] handler.OpenedRegionHandler: Handling OPENED of ff0af29a7f8e111fc4d46c7a30a17459 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,539 DEBUG [AM.ZK.Worker-pool2-t39] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node ff0af29a7f8e111fc4d46c7a30a17459 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,540 DEBUG [AM.ZK.Worker-pool2-t39] master.AssignmentManager: Znode MONE,O31\x09156336\x093\x09152045625,1463618251297.ff0af29a7f8e111fc4d46c7a30a17459. deleted, state: {ff0af29a7f8e111fc4d46c7a30a17459 state=OPEN, ts=1464282657535, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,540 INFO  [AM.ZK.Worker-pool2-t39] master.RegionStates: Onlined ff0af29a7f8e111fc4d46c7a30a17459 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,540 INFO  [AM.ZK.Worker-pool2-t39] master.RegionStates: Offlined ff0af29a7f8e111fc4d46c7a30a17459 from fsdata3c.corp.arc.com,60020,1464282353206
2016-05-26 17:10:57,543 DEBUG [AM.ZK.Worker-pool2-t26] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=1e9ca9e463ebd7af1fbd910bd2d570a6, current_state={1e9ca9e463ebd7af1fbd910bd2d570a6 state=PENDING_OPEN, ts=1464282656630, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,543 INFO  [AM.ZK.Worker-pool2-t26] master.RegionStates: Transitioned {1e9ca9e463ebd7af1fbd910bd2d570a6 state=PENDING_OPEN, ts=1464282656630, server=fsdata1c.corp.arc.com,60020,1464282355492} to {1e9ca9e463ebd7af1fbd910bd2d570a6 state=OPENING, ts=1464282657543, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,570 DEBUG [AM.ZK.Worker-pool2-t31] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=a438c3fccb4dffce6c3f2fb2a217ff18, current_state={a438c3fccb4dffce6c3f2fb2a217ff18 state=OPENING, ts=1464282657526, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,570 INFO  [AM.ZK.Worker-pool2-t31] master.RegionStates: Transitioned {a438c3fccb4dffce6c3f2fb2a217ff18 state=OPENING, ts=1464282657526, server=fsdata1c.corp.arc.com,60020,1464282355492} to {a438c3fccb4dffce6c3f2fb2a217ff18 state=OPEN, ts=1464282657570, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,571 DEBUG [AM.ZK.Worker-pool2-t31] handler.OpenedRegionHandler: Handling OPENED of a438c3fccb4dffce6c3f2fb2a217ff18 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,574 DEBUG [AM.ZK.Worker-pool2-t31] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node a438c3fccb4dffce6c3f2fb2a217ff18 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,574 DEBUG [AM.ZK.Worker-pool2-t31] master.AssignmentManager: Znode CUTOFF1,,1463913041538.a438c3fccb4dffce6c3f2fb2a217ff18. deleted, state: {a438c3fccb4dffce6c3f2fb2a217ff18 state=OPEN, ts=1464282657570, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,574 INFO  [AM.ZK.Worker-pool2-t31] master.RegionStates: Onlined a438c3fccb4dffce6c3f2fb2a217ff18 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,574 INFO  [AM.ZK.Worker-pool2-t31] master.RegionStates: Offlined a438c3fccb4dffce6c3f2fb2a217ff18 from fsdata3c.corp.arc.com,60020,1464282353206
2016-05-26 17:10:57,576 DEBUG [AM.ZK.Worker-pool2-t34] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=bd20eeb125be62c29b0e19960472e76d, current_state={bd20eeb125be62c29b0e19960472e76d state=PENDING_OPEN, ts=1464282656702, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,576 INFO  [AM.ZK.Worker-pool2-t34] master.RegionStates: Transitioned {bd20eeb125be62c29b0e19960472e76d state=PENDING_OPEN, ts=1464282656702, server=fsdata1c.corp.arc.com,60020,1464282355492} to {bd20eeb125be62c29b0e19960472e76d state=OPENING, ts=1464282657576, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,584 DEBUG [AM.ZK.Worker-pool2-t35] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=f5836191f2d1a9806269864db4287786, current_state={f5836191f2d1a9806269864db4287786 state=OPENING, ts=1464282657518, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,584 INFO  [AM.ZK.Worker-pool2-t35] master.RegionStates: Transitioned {f5836191f2d1a9806269864db4287786 state=OPENING, ts=1464282657518, server=fsdata1c.corp.arc.com,60020,1464282355492} to {f5836191f2d1a9806269864db4287786 state=OPEN, ts=1464282657584, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,584 DEBUG [AM.ZK.Worker-pool2-t35] handler.OpenedRegionHandler: Handling OPENED of f5836191f2d1a9806269864db4287786 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,587 DEBUG [AM.ZK.Worker-pool2-t35] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node f5836191f2d1a9806269864db4287786 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,587 DEBUG [AM.ZK.Worker-pool2-t35] master.AssignmentManager: Znode MONE,O31\x09145411\x092\x091526,1452771105934.f5836191f2d1a9806269864db4287786. deleted, state: {f5836191f2d1a9806269864db4287786 state=OPEN, ts=1464282657584, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,587 INFO  [AM.ZK.Worker-pool2-t35] master.RegionStates: Onlined f5836191f2d1a9806269864db4287786 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,587 INFO  [AM.ZK.Worker-pool2-t35] master.RegionStates: Offlined f5836191f2d1a9806269864db4287786 from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,594 DEBUG [AM.ZK.Worker-pool2-t33] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=f07d37734f7d70dd47f1545345b772e9, current_state={f07d37734f7d70dd47f1545345b772e9 state=PENDING_OPEN, ts=1464282656745, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,594 INFO  [AM.ZK.Worker-pool2-t33] master.RegionStates: Transitioned {f07d37734f7d70dd47f1545345b772e9 state=PENDING_OPEN, ts=1464282656745, server=fsdata1c.corp.arc.com,60020,1464282355492} to {f07d37734f7d70dd47f1545345b772e9 state=OPENING, ts=1464282657594, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,625 DEBUG [AM.ZK.Worker-pool2-t21] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=1e9ca9e463ebd7af1fbd910bd2d570a6, current_state={1e9ca9e463ebd7af1fbd910bd2d570a6 state=OPENING, ts=1464282657543, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,625 INFO  [AM.ZK.Worker-pool2-t21] master.RegionStates: Transitioned {1e9ca9e463ebd7af1fbd910bd2d570a6 state=OPENING, ts=1464282657543, server=fsdata1c.corp.arc.com,60020,1464282355492} to {1e9ca9e463ebd7af1fbd910bd2d570a6 state=OPEN, ts=1464282657625, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,625 DEBUG [AM.ZK.Worker-pool2-t21] handler.OpenedRegionHandler: Handling OPENED of 1e9ca9e463ebd7af1fbd910bd2d570a6 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,633 DEBUG [AM.ZK.Worker-pool2-t21] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node 1e9ca9e463ebd7af1fbd910bd2d570a6 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,633 DEBUG [AM.ZK.Worker-pool2-t21] master.AssignmentManager: Znode MONE,,1447155053918.1e9ca9e463ebd7af1fbd910bd2d570a6. deleted, state: {1e9ca9e463ebd7af1fbd910bd2d570a6 state=OPEN, ts=1464282657625, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,633 INFO  [AM.ZK.Worker-pool2-t21] master.RegionStates: Onlined 1e9ca9e463ebd7af1fbd910bd2d570a6 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,633 INFO  [AM.ZK.Worker-pool2-t21] master.RegionStates: Offlined 1e9ca9e463ebd7af1fbd910bd2d570a6 from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,645 DEBUG [AM.ZK.Worker-pool2-t37] master.AssignmentManager: Handling RS_ZK_REGION_OPENING, server=fsdata1c.corp.arc.com,60020,1464282355492, region=44e206d15b62ed4d452545242bd105cd, current_state={44e206d15b62ed4d452545242bd105cd state=PENDING_OPEN, ts=1464282657167, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,646 INFO  [AM.ZK.Worker-pool2-t37] master.RegionStates: Transitioned {44e206d15b62ed4d452545242bd105cd state=PENDING_OPEN, ts=1464282657167, server=fsdata1c.corp.arc.com,60020,1464282355492} to {44e206d15b62ed4d452545242bd105cd state=OPENING, ts=1464282657646, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,657 DEBUG [AM.ZK.Worker-pool2-t28] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=f07d37734f7d70dd47f1545345b772e9, current_state={f07d37734f7d70dd47f1545345b772e9 state=OPENING, ts=1464282657594, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,657 INFO  [AM.ZK.Worker-pool2-t28] master.RegionStates: Transitioned {f07d37734f7d70dd47f1545345b772e9 state=OPENING, ts=1464282657594, server=fsdata1c.corp.arc.com,60020,1464282355492} to {f07d37734f7d70dd47f1545345b772e9 state=OPEN, ts=1464282657657, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,657 DEBUG [AM.ZK.Worker-pool2-t28] handler.OpenedRegionHandler: Handling OPENED of f07d37734f7d70dd47f1545345b772e9 from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,661 DEBUG [AM.ZK.Worker-pool2-t28] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node f07d37734f7d70dd47f1545345b772e9 in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,661 DEBUG [AM.ZK.Worker-pool2-t28] master.AssignmentManager: Znode TSO,,1464172262155.f07d37734f7d70dd47f1545345b772e9. deleted, state: {f07d37734f7d70dd47f1545345b772e9 state=OPEN, ts=1464282657657, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,661 INFO  [AM.ZK.Worker-pool2-t28] master.RegionStates: Onlined f07d37734f7d70dd47f1545345b772e9 on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,661 INFO  [AM.ZK.Worker-pool2-t28] master.RegionStates: Offlined f07d37734f7d70dd47f1545345b772e9 from fsdata3c.corp.arc.com,60020,1464282353206
2016-05-26 17:10:57,701 DEBUG [AM.ZK.Worker-pool2-t29] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=44e206d15b62ed4d452545242bd105cd, current_state={44e206d15b62ed4d452545242bd105cd state=OPENING, ts=1464282657646, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,701 INFO  [AM.ZK.Worker-pool2-t29] master.RegionStates: Transitioned {44e206d15b62ed4d452545242bd105cd state=OPENING, ts=1464282657646, server=fsdata1c.corp.arc.com,60020,1464282355492} to {44e206d15b62ed4d452545242bd105cd state=OPEN, ts=1464282657701, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,701 DEBUG [AM.ZK.Worker-pool2-t29] handler.OpenedRegionHandler: Handling OPENED of 44e206d15b62ed4d452545242bd105cd from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,705 DEBUG [AM.ZK.Worker-pool2-t29] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node 44e206d15b62ed4d452545242bd105cd in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,705 DEBUG [AM.ZK.Worker-pool2-t38] master.AssignmentManager: Znode CUTOFF4,O11\x09166343\x093\x09162830813,1464012806340.44e206d15b62ed4d452545242bd105cd. deleted, state: {44e206d15b62ed4d452545242bd105cd state=OPEN, ts=1464282657701, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,705 INFO  [AM.ZK.Worker-pool2-t38] master.RegionStates: Onlined 44e206d15b62ed4d452545242bd105cd on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,705 INFO  [AM.ZK.Worker-pool2-t38] master.RegionStates: Offlined 44e206d15b62ed4d452545242bd105cd from fsdata2c.corp.arc.com,60020,1464282352148
2016-05-26 17:10:57,711 DEBUG [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Handling RS_ZK_REGION_OPENED, server=fsdata1c.corp.arc.com,60020,1464282355492, region=bd20eeb125be62c29b0e19960472e76d, current_state={bd20eeb125be62c29b0e19960472e76d state=OPENING, ts=1464282657576, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,711 INFO  [AM.ZK.Worker-pool2-t30] master.RegionStates: Transitioned {bd20eeb125be62c29b0e19960472e76d state=OPENING, ts=1464282657576, server=fsdata1c.corp.arc.com,60020,1464282355492} to {bd20eeb125be62c29b0e19960472e76d state=OPEN, ts=1464282657711, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,711 DEBUG [AM.ZK.Worker-pool2-t30] handler.OpenedRegionHandler: Handling OPENED of bd20eeb125be62c29b0e19960472e76d from fsdata1c.corp.arc.com,60020,1464282355492; deleting unassigned node
2016-05-26 17:10:57,714 DEBUG [AM.ZK.Worker-pool2-t30] zookeeper.ZKAssign: master:60000-0x354ee0630230000, quorum=fsdata2c.corp.arc.com:2181,fsdata1c.corp.arc.com:2181,fsmaster1c.corp.arc.com:2181, baseZNode=/hbase-unsecure Deleted unassigned node bd20eeb125be62c29b0e19960472e76d in expected state RS_ZK_REGION_OPENED
2016-05-26 17:10:57,714 DEBUG [AM.ZK.Worker-pool2-t30] master.AssignmentManager: Znode MONO,,1450265445296.bd20eeb125be62c29b0e19960472e76d. deleted, state: {bd20eeb125be62c29b0e19960472e76d state=OPEN, ts=1464282657711, server=fsdata1c.corp.arc.com,60020,1464282355492}
2016-05-26 17:10:57,714 INFO  [AM.ZK.Worker-pool2-t30] master.RegionStates: Onlined bd20eeb125be62c29b0e19960472e76d on fsdata1c.corp.arc.com,60020,1464282355492
2016-05-26 17:10:57,714 INFO  [AM.ZK.Worker-pool2-t30] master.RegionStates: Offlined bd20eeb125be62c29b0e19960472e76d from fsdata3c.corp.arc.com,60020,1464282353206
2016-05-26 17:13:42,930 DEBUG [master:fsmaster1c:60000.oldLogCleaner] master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: fsdata1c.corp.arc.com%2C60020%2C1464281939497.1464281944440
2016-05-26 17:13:42,934 DEBUG [master:fsmaster1c:60000.oldLogCleaner] master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: fsdata2c.corp.arc.com%2C60020%2C1464281948047.1464281950096
2016-05-26 17:13:42,938 DEBUG [master:fsmaster1c:60000.oldLogCleaner] master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: fsdata3c.corp.arc.com%2C60020%2C1464281938965.1464281944289
2016-05-26 17:13:42,941 DEBUG [master:fsmaster1c:60000.oldLogCleaner] master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: fsdata3c.corp.arc.com%2C60020%2C1464281938965.1464281946729.meta
&lt;/PRE&gt;</description>
      <pubDate>Fri, 27 May 2016 01:18:22 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104212#M29889</guid>
      <dc:creator>raja_ray</dc:creator>
      <dc:date>2016-05-27T01:18:22Z</dc:date>
    </item>
    <item>
      <title>Re: Unable to execute hbase job due to  Unexpected closed connection exception</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104213#M29890</link>
      <description>&lt;P&gt;When you see error again, find the corresponding server log and attach snippet.&lt;/P&gt;&lt;P&gt;What may be happening is that there hasn't been KeyValue whose size is greater than the new thresold.&lt;/P&gt;</description>
      <pubDate>Fri, 27 May 2016 01:21:04 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Unable-to-execute-hbase-job-due-to-Unexpected-closed/m-p/104213#M29890</guid>
      <dc:creator>tyu</dc:creator>
      <dc:date>2016-05-27T01:21:04Z</dc:date>
    </item>
  </channel>
</rss>