2017-03-21 14:06:24,535 INFO [main] zookeeper.ZooKeeper: Client environment:java.library.path=:/usr/hdp/2.5.0.0-1245/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1245/hadoop/lib/native
2017-03-21 14:06:24,535 INFO [main] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2017-03-21 14:06:24,535 INFO [main] zookeeper.ZooKeeper: Client environment:java.compiler=
2017-03-21 14:06:24,535 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
2017-03-21 14:06:24,535 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2017-03-21 14:06:24,535 INFO [main] zookeeper.ZooKeeper: Client environment:os.version=2.6.32-642.6.2.el6.x86_64
2017-03-21 14:06:24,535 INFO [main] zookeeper.ZooKeeper: Client environment:user.name=hbase
2017-03-21 14:06:24,535 INFO [main] zookeeper.ZooKeeper: Client environment:user.home=/home/hbase
2017-03-21 14:06:24,535 INFO [main] zookeeper.ZooKeeper: Client environment:user.dir=/home/hbase
2017-03-21 14:06:24,536 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop1.example.com:2181,hadoop3.example.com:2181,hadoop2.example.com:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@383bfa16
2017-03-21 14:06:24,556 INFO [main-SendThread(hadoop2.example.com:2181)] zookeeper.Login: successfully logged in.
2017-03-21 14:06:24,558 INFO [Thread-4] zookeeper.Login: TGT refresh thread started.
2017-03-21 14:06:24,563 INFO [main-SendThread(hadoop2.example.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2017-03-21 14:06:24,572 INFO [Thread-4] zookeeper.Login: TGT valid starting at: Tue Mar 21 11:36:22 EDT 2017
2017-03-21 14:06:24,572 INFO [Thread-4] zookeeper.Login: TGT expires: Wed Mar 22 11:36:22 EDT 2017
2017-03-21 14:06:24,572 INFO [Thread-4] zookeeper.Login: TGT refresh sleeping until: Wed Mar 22 07:45:48 EDT 2017
2017-03-21 14:06:24,579 INFO [main-SendThread(hadoop2.example.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server hadoop2.example.com/10.100.44.16:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2017-03-21 14:06:24,584 INFO [main-SendThread(hadoop2.example.com:2181)] zookeeper.ClientCnxn: Socket connection established to hadoop2.example.com/10.100.44.16:2181, initiating session
2017-03-21 14:06:24,589 INFO [main-SendThread(hadoop2.example.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server hadoop2.example.com/10.100.44.16:2181, sessionid = 0x259f0e00a8f1274, negotiated timeout = 40000
2017-03-21 14:06:25,721 INFO [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2017-03-21 14:06:25,753 INFO [main] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2fd1731c connecting to ZooKeeper ensemble=hadoop1.example.com:2181,hadoop3.example.com:2181,hadoop2.example.com:2181
2017-03-21 14:06:25,753 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop1.example.com:2181,hadoop3.example.com:2181,hadoop2.example.com:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@5ae76500
2017-03-21 14:06:25,754 INFO [main-SendThread(hadoop3.example.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2017-03-21 14:06:25,754 INFO [main-SendThread(hadoop3.example.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server hadoop3.example.com/10.100.44.19:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2017-03-21 14:06:25,755 INFO [main-SendThread(hadoop3.example.com:2181)] zookeeper.ClientCnxn: Socket connection established to hadoop3.example.com/10.100.44.19:2181, initiating session
2017-03-21 14:06:25,757 INFO [main-SendThread(hadoop3.example.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server hadoop3.example.com/10.100.44.19:2181, sessionid = 0x359f0e005e412b8, negotiated timeout = 40000
2017-03-21 14:06:25,765 INFO [main] zookeeper.RecoverableZooKeeper: Process identifier=TokenUtil-getAuthToken connecting to ZooKeeper ensemble=hadoop1.example.com:2181,hadoop3.example.com:2181,hadoop2.example.com:2181
2017-03-21 14:06:25,766 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop1.example.com:2181,hadoop3.example.com:2181,hadoop2.example.com:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@54709809
2017-03-21 14:06:25,767 INFO [main-SendThread(hadoop1.example.com:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2017-03-21 14:06:25,767 INFO [main-SendThread(hadoop1.example.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server hadoop1.example.com/10.100.44.17:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2017-03-21 14:06:25,768 INFO [main-SendThread(hadoop1.example.com:2181)] zookeeper.ClientCnxn: Socket connection established to hadoop1.example.com/10.100.44.17:2181, initiating session
2017-03-21 14:06:25,770 INFO [main-SendThread(hadoop1.example.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server hadoop1.example.com/10.100.44.17:2181, sessionid = 0x159f0dffe4e11ca, negotiated timeout = 40000
2017-03-21 14:06:25,778 INFO [main] zookeeper.ZooKeeper: Session: 0x159f0dffe4e11ca closed
2017-03-21 14:06:25,778 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2017-03-21 14:06:25,811 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x359f0e005e412b8
2017-03-21 14:06:25,812 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2017-03-21 14:06:25,812 INFO [main] zookeeper.ZooKeeper: Session: 0x359f0e005e412b8 closed
2017-03-21 14:06:25,833 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x259f0e00a8f1274
2017-03-21 14:06:25,834 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2017-03-21 14:06:25,834 INFO [main] zookeeper.ZooKeeper: Session: 0x259f0e00a8f1274 closed
2017-03-21 14:06:26,139 INFO [main] impl.TimelineClientImpl: Timeline service address: http://hadoop2.example.com:8188/ws/v1/timeline/
2017-03-21 14:06:26,275 INFO [main] client.AHSProxy: Connecting to Application History server at hadoop2.example.com/10.100.44.16:10200
2017-03-21 14:06:26,430 INFO [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2017-03-21 14:06:26,463 WARN [main] ipc.Client: Failed to connect to server: hadoop2.example.com/10.100.44.16:8032: retries get failed due to exceeded maximum allowed retries number: 0
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:650)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:745)
    at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1618)
    at org.apache.hadoop.ipc.Client.call(Client.java:1449)
    at org.apache.hadoop.ipc.Client.call(Client.java:1396)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
    at com.sun.proxy.$Proxy23.getNewApplication(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getNewApplication(ApplicationClientProtocolPBClientImpl.java:221)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
    at com.sun.proxy.$Proxy24.getNewApplication(Unknown Source)
    at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getNewApplication(YarnClientImpl.java:225)
    at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.createApplication(YarnClientImpl.java:233)
    at org.apache.hadoop.mapred.ResourceMgrDelegate.getNewJobID(ResourceMgrDelegate.java:188)
    at org.apache.hadoop.mapred.YARNRunner.getNewJobID(YARNRunner.java:231)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:153)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:721)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.hadoop.hbase.mapreduce.ImportTsv.main(ImportTsv.java:725)
2017-03-21 14:06:26,471 INFO [main] client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
2017-03-21 14:06:26,506 INFO [main] hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 129 for hive on ha-hdfs:EXAMPLE-HA
2017-03-21 14:06:26,625 INFO [main] security.TokenCache: Got dt for hdfs://EXAMPLE-HA; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:EXAMPLE-HA, Ident: (HDFS_DELEGATION_TOKEN token 129 for hive)
2017-03-21 14:06:26,625 WARN [main] token.Token: Cannot find class for token kind kms-dt
2017-03-21 14:06:26,625 INFO [main] security.TokenCache: Got dt for hdfs://EXAMPLE-HA; Kind: kms-dt, Service: 10.100.44.17:9292, Ident: 00 04 68 69 76 65 04 79 61 72 6e 00 8a 01 5a f2 0c 73 01 8a 01 5b 16 18 f7 01 1a 33
2017-03-21 14:06:27,300 INFO [main] input.FileInputFormat: Total input paths to process : 1
2017-03-21 14:06:27,356 INFO [main] mapreduce.JobSubmitter: number of splits:1
2017-03-21 14:06:27,366 INFO [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2017-03-21 14:06:27,441 INFO [main] mapreduce.JobSubmitter: Submitting tokens for job: job_1490118831477_0002
2017-03-21 14:06:27,442 WARN [main] token.Token: Cannot find class for token kind kms-dt
2017-03-21 14:06:27,442 WARN [main] token.Token: Cannot find class for token kind kms-dt
Kind: kms-dt, Service: 10.100.44.17:9292, Ident: 00 04 68 69 76 65 04 79 61 72 6e 00 8a 01 5a f2 0c 73 01 8a 01 5b 16 18 f7 01 1a 33
2017-03-21 14:06:27,442 INFO [main] mapreduce.JobSubmitter: Kind: HBASE_AUTH_TOKEN, Service: f7a4c2e0-4325-432b-b6cf-83a8e7122780, Ident: (org.apache.hadoop.hbase.security.token.AuthenticationTokenIdentifier@6)
2017-03-21 14:06:27,444 INFO [main] mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:EXAMPLE-HA, Ident: (HDFS_DELEGATION_TOKEN token 129 for hive)
2017-03-21 14:06:28,137 INFO [main] impl.YarnClientImpl: Submitted application application_1490118831477_0002
2017-03-21 14:06:28,176 INFO [main] mapreduce.Job: The url to track the job: http://hadoop3.example.com:8088/proxy/application_1490118831477_0002/
2017-03-21 14:06:28,177 INFO [main] mapreduce.Job: Running job: job_1490118831477_0002
2017-03-21 14:06:36,269 INFO [main] mapreduce.Job: Job job_1490118831477_0002 running in uber mode : false
2017-03-21 14:06:36,270 INFO [main] mapreduce.Job: map 0% reduce 0%
2017-03-21 14:06:45,345 INFO [main] mapreduce.Job: Task Id : attempt_1490118831477_0002_m_000000_0, Status : FAILED
Error: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1940 actions: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family id does not exist in region besthbase,,1490117043490.f816b4ccfbd445cbb56f827df5213d1d.
in table 'besthbase',
 {NAME => 'f1', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f2', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f3', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f4', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f5', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f6', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f7', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f8', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:722)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:677)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2054)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32303)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:745)
: 1940 times,
    at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:234)
    at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1700(AsyncProcess.java:214)
    at org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1751)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:208)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.doMutate(BufferedMutatorImpl.java:141)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.mutate(BufferedMutatorImpl.java:98)
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:138)
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:94)
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:658)
    at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
    at org.apache.hadoop.hbase.mapreduce.TsvImporterMapper.map(TsvImporterMapper.java:165)
    at org.apache.hadoop.hbase.mapreduce.TsvImporterMapper.map(TsvImporterMapper.java:45)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
2017-03-21 14:06:52,390 INFO [main] mapreduce.Job: Task Id : attempt_1490118831477_0002_m_000000_1, Status : FAILED
Error: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1940 actions: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family id does not exist in region besthbase,,1490117043490.f816b4ccfbd445cbb56f827df5213d1d.
in table 'besthbase',
 {NAME => 'f1', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f2', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f3', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f4', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f5', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f6', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f7', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f8', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:722)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:677)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2054)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32303)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:745)
: 1940 times,
    at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:234)
    at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1700(AsyncProcess.java:214)
    at org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1751)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:208)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.doMutate(BufferedMutatorImpl.java:141)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.mutate(BufferedMutatorImpl.java:98)
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:138)
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:94)
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:658)
    at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
    at org.apache.hadoop.hbase.mapreduce.TsvImporterMapper.map(TsvImporterMapper.java:165)
    at org.apache.hadoop.hbase.mapreduce.TsvImporterMapper.map(TsvImporterMapper.java:45)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
2017-03-21 14:06:58,412 INFO [main] mapreduce.Job: Task Id : attempt_1490118831477_0002_m_000000_2, Status : FAILED
Error: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1940 actions: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family id does not exist in region besthbase,,1490117043490.f816b4ccfbd445cbb56f827df5213d1d.
in table 'besthbase',
 {NAME => 'f1', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f2', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f3', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f4', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f5', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f6', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f7', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
 {NAME => 'f8', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:722)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:677)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2054)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32303)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:745)
: 1940 times,
    at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:234)
    at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1700(AsyncProcess.java:214)
    at org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1751)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:208)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.doMutate(BufferedMutatorImpl.java:141)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.mutate(BufferedMutatorImpl.java:98)
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:138)
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:94)
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:658)
    at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
    at org.apache.hadoop.hbase.mapreduce.TsvImporterMapper.map(TsvImporterMapper.java:165)
    at org.apache.hadoop.hbase.mapreduce.TsvImporterMapper.map(TsvImporterMapper.java:45)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
2017-03-21 14:07:04,432 INFO [main] mapreduce.Job: map 100% reduce 0%
2017-03-21 14:07:04,437 INFO [main] mapreduce.Job: Job job_1490118831477_0002 failed with state FAILED due to: Task failed task_1490118831477_0002_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
2017-03-21 14:07:04,515 INFO [main] mapreduce.Job: Counters: 9
    Job Counters
        Failed map tasks=4
        Launched map tasks=4
        Other local map tasks=3
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=21363
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=21363
        Total vcore-milliseconds taken by all map tasks=21363
        Total megabyte-milliseconds taken by all map tasks=32813568
[hbase@hadoop1 ~]$
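What actually kills the job is the NoSuchColumnFamilyException repeated by every map attempt: ImportTsv is writing Puts against a column family named id, while the table 'besthbase' only defines the families f1 through f8. The earlier "Connection refused ... Failing over to rm2" message and the "Cannot find class for token kind kms-dt" warnings are incidental here, since the application is still submitted and runs. Below is a minimal sketch of how one might confirm and resolve the mismatch; the column mapping and input path are hypothetical, because the original ImportTsv command line is not shown in the log.

# In the hbase shell -- check which families the table really has (f1..f8 according to the log):
describe 'besthbase'

# Option A: add the missing family, so Puts against "id" become valid:
alter 'besthbase', {NAME => 'id'}

# Option B (hypothetical mapping and path): re-run ImportTsv with -Dimporttsv.columns
# pointing only at families that already exist, e.g. f1:
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,f1:id,f1:name \
  besthbase /user/hbase/input.tsv

Which option is appropriate depends on how the TSV columns are meant to map onto the schema; the log only shows that the family named in the mapping (id) is not one of the eight the table defines.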