Member since: 07-02-2018
Posts: 26
Kudos Received: 0
Solutions: 0
01-29-2019
04:31 AM
The HBase master stops a few seconds after starting; the region servers keep running. HBase master logs: WARN [slnxhadoop01:16000.activeMasterManager] master.SplitLogManager: error while splitting logs in [hdfs://slnxhadoop01.noid.in.sopra:8020/apps/hbase/data/WALs/slnxhadoop04.dhcp.noid.in.sopra,16020,1536839826364-splitting] installed = 1 but only 0 done
2019-01-10 16:48:57,344 FATAL [slnxhadoop01:16000.activeMasterManager] master.HMaster: Failed to become active master
java.io.IOException: error or interrupted while splitting logs in [hdfs://slnxhadoop01.noid.in.sopra:8020/apps/hbase/data/WALs/slnxhadoop04.dhcp.noid.in.sopra,16020,1536839826364-splitting] Task = installed = 1 done = 0 error = 1
at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:290)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:429)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitMetaLog(MasterFileSystem.java:339)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitMetaLog(MasterFileSystem.java:330)
at org.apache.hadoop.hbase.master.HMaster.splitMetaLogBeforeAssignment(HMaster.java:1203)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:806)
at org.apache.hadoop.hbase.master.HMaster.access$900(HMaster.java:225)
at org.apache.hadoop.hbase.master.HMaster$3.run(HMaster.java:2038)
at java.lang.Thread.run(Thread.java:745)
2019-01-10 16:48:57,345 FATAL [slnxhadoop01:16000.activeMasterManager] master.HMaster: Master server abort: loaded coprocessors are: []
2019-01-10 16:48:57,345 FATAL [slnxhadoop01:16000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown.
java.io.IOException: error or interrupted while splitting logs in [hdfs://slnxhadoop01.noid.in.sopra:8020/apps/hbase/data/WALs/slnxhadoop04.dhcp.noid.in.sopra,16020,1536839826364-splitting] Task = installed = 1 done = 0 error = 1
at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:290)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:429)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitMetaLog(MasterFileSystem.java:339)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitMetaLog(MasterFileSystem.java:330)
at org.apache.hadoop.hbase.master.HMaster.splitMetaLogBeforeAssignment(HMaster.java:1203)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:806)
at org.apache.hadoop.hbase.master.HMaster.access$900(HMaster.java:225)
at org.apache.hadoop.hbase.master.HMaster$3.run(HMaster.java:2038)
at java.lang.Thread.run(Thread.java:745)
2019-01-10 16:48:57,345 INFO [slnxhadoop01:16000.activeMasterManager] regionserver.HRegionServer: STOPPED: Unhandled exception. Starting shutdown.
2019-01-10 16:48:57,346 INFO [master/slnxhadoop01.noid.in.sopra/172.26.50.102:16000] regionserver.HRegionServer: Stopping infoServer
2019-01-10 16:48:57,400 INFO [master/slnxhadoop01.noid.in.sopra/172.26.50.102:16000] mortbay.log: Stopped SelectChannelConnector@0.0.0.0:16010
2019-01-10 16:48:57,403 INFO [master/slnxhadoop01.noid.in.sopra/172.26.50.102:16000] procedure2.ProcedureExecutor: Stopping the procedure executor
2019-01-10 16:48:57,403 INFO [master/slnxhadoop01.noid.in.sopra/172.26.50.102:16000] wal.WALProcedureStore: Stopping the WAL Procedure Store
2019-01-10 16:48:57,417 INFO [master/slnxhadoop01.noid.in.sopra/172.26.50.102:16000] regionserver.HRegionServer: stopping server slnxhadoop01.noid.in.sopra,16000,1547117834090
2019-01-10 16:48:57,417 INFO [master/slnxhadoop01.noid.in.sopra/172.26.50.102:16000] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x36691ecadd9004c
2019-01-10 16:48:57,422 INFO [master/slnxhadoop01.noid.in.sopra/172.26.50.102:16000] zookeeper.ZooKeeper: Session: 0x36691ecadd9004c closed
2019-01-10 16:48:57,422 INFO [master/slnxhadoop01.noid.in.sopra/172.26.50.102:16000-EventThread] zookeeper.ClientCnxn: EventThread shut down
2019-01-10 16:48:57,423 INFO [master/slnxhadoop01.noid.in.sopra/172.26.50.102:16000] regionserver.HRegionServer: stopping server slnxhadoop01.noid.in.sopra,16000,1547117834090; all regions closed.
2019-01-10 16:48:57,423 INFO [master/slnxhadoop01.noid.in.sopra/172.26.50.102:16000] hbase.ChoreService: Chore service for: slnxhadoop01.noid.in.sopra,16000,1547117834090 had [[ScheduledChore: Name: HFileCleaner Period: 60000 Unit: MILLISECONDS], [ScheduledChore: Name: LogsCleaner Period: 60000 Unit: MILLISECONDS]] on shutdown
2019-01-10 16:48:57,427 INFO [master/slnxhadoop01.noid.in.sopra/172.26.50.102:16000] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x36691ecadd9004d
2019-01-10 16:48:57,428 INFO [master/slnxhadoop01.noid.in.sopra/172.26.50.102:16000] zookeeper.ZooKeeper: Session: 0x36691ecadd9004d closed
2019-01-10 16:48:57,428 INFO [master/slnxhadoop01.noid.in.sopra/172.26.50.102:16000] hbase.ChoreService: Chore service for: slnxhadoop01.noid.in.sopra,16000,1547117834090_splitLogManager_ had [[ScheduledChore: Name: SplitLogManager Timeout Monitor Period: 1000 Unit: MILLISECONDS]] on shutdown
2019-01-10 16:48:57,428 INFO [master/slnxhadoop01.noid.in.sopra/172.26.50.102:16000] flush.MasterFlushTableProcedureManager: stop: server shutting down.
2019-01-10 16:48:57,428 INFO [master/slnxhadoop01.noid.in.sopra/172.26.50.102:16000] ipc.RpcServer: Stopping server on 16000
2019-01-10 16:48:57,428 INFO [slnxhadoop01:16000.activeMasterManager-EventThread] zookeeper.ClientCnxn: EventThread shut down
2019-01-10 16:48:57,429 INFO [RpcServer.listener,port=16000] ipc.RpcServer: RpcServer.listener,port=16000: stopping
2019-01-10 16:48:57,430 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2019-01-10 16:48:57,430 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2019-01-10 16:48:57,436 INFO [master/slnxhadoop01.noid.in.sopra/172.26.50.102:16000] zookeeper.RecoverableZooKeeper: Node /hbase-unsecure/rs/slnxhadoop01.noid.in.sopra,16000,1547117834090 already deleted, retry=false
2019-01-10 16:48:57,437 INFO [master/slnxhadoop01.noid.in.sopra/172.26.50.102:16000] zookeeper.ZooKeeper: Session: 0x36691ecadd9004b closed
2019-01-10 16:48:57,437 INFO [master/slnxhadoop01.noid.in.sopra/172.26.50.102:16000] regionserver.HRegionServer: stopping server slnxhadoop01.noid.in.sopra,16000,1547117834090; zookeeper connection closed.
2019-01-10 16:48:57,437 INFO [master/slnxhadoop01.noid.in.sopra/172.26.50.102:16000] regionserver.HRegionServer: master/slnxhadoop01.noid.in.sopra/172.26.50.102:16000 exiting
2019-01-10 16:48:57,438 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
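A common way to get past a stuck WAL split like the one in the log above is to sideline the `-splitting` directory in HDFS so the master can finish startup. This is only a hedged sketch, not an official fix: the WAL path is taken from the error message above, the `/tmp/wal-backup` location is an assumption, and sidelining WALs can lose any edits that were not yet flushed to HFiles.

```shell
# Sketch only: sideline the stuck WAL -splitting directory so the HBase
# master can start. Run as the hbase user. Unflushed edits in these WALs
# may be lost, so move (back up) the directory rather than deleting it.

# Inspect the directory named in the SplitLogManager error
hdfs dfs -ls "/apps/hbase/data/WALs/slnxhadoop04.dhcp.noid.in.sopra,16020,1536839826364-splitting"

# Move it aside (backup location /tmp/wal-backup is illustrative)
hdfs dfs -mkdir -p /tmp/wal-backup
hdfs dfs -mv "/apps/hbase/data/WALs/slnxhadoop04.dhcp.noid.in.sopra,16020,1536839826364-splitting" /tmp/wal-backup/

# Then restart the HBase master and check that it stays up
```

If the master then starts cleanly, the sidelined WALs can be inspected offline before deciding whether to discard them.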
Labels:
- Apache HBase
09-06-2018
10:19 AM
I am trying to upgrade HDP 2.5.3 to 2.6.5, but while registering version HDP 2.6.5 the Save button is disabled. I have enabled "Skip Repository Base URL validation", but Save is still disabled. I am using Ambari 2.6.2.2 and have uploaded the version definition file. hdp-265-register-error.png
08-31-2018
10:12 AM
@Jay Kumar SenSharma Thanks for the reply. I followed the steps given in https://community.hortonworks.com/articles/79327/re-initializing-apache-accumulo-under-hdp.html and it worked.
08-31-2018
09:28 AM
@Jay Kumar SenSharma Thanks for the reply. I tried to drop the table testtable but am getting the error: Thread "shell" stuck on IO to slnxhadoop03.dhcp.noid.in:9999 (0) for at least 120348 ms. I am using HDP 2.5.3 and Ambari version 2.4.2.0.
08-31-2018
07:17 AM
Accumulo is not showing any alert (see accumulo.png and accumulo-error.png), but the Accumulo service check is failing with the error: ERROR: org.apache.accumulo.core.client.TableExistsException: Table testtable exists
2018-08-30 17:10:46,508 [shell.Shell] ERROR: java.lang.IllegalStateException: Not in a table context. Please use 'table <tableName>' to switch to a table, or use '-t' to specify a table if option is available.
2018-08-30 17:10:46,509 [shell.Shell] ERROR: java.lang.IllegalStateException: Not in a table context. Please use 'table <tableName>' to switch to a table, or use '-t' to specify a table if option is available.
2018-08-30 17:10:46,509 [shell.Shell] ERROR: java.lang.IllegalStateException: Not in a table context. Please use 'table <tableName>' to switch to a table, or use '-t' to specify a table if option is available.
2018-08-30 17:10:46,509 [shell.Shell] ERROR: java.lang.IllegalStateException: Not in a table context. Please use 'table <tableName>' to switch to a table, or use '-t' to specify a table if option is available.
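Since the service check fails only because testtable is left over from an earlier run, one hedged workaround is to drop it from the Accumulo shell before re-running the check. The root user and the -f (force) flag below are assumptions; use whatever admin principal and authentication your cluster is configured with.

```shell
# Drop the leftover service-check table so the check can recreate it.
# User name and authentication method depend on your cluster setup.
accumulo shell -u root -e "droptable -f testtable"
```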
Labels:
- Apache Accumulo
08-24-2018
10:12 AM
@Jordan Moore Thanks for the help.
08-22-2018
11:54 AM
I am trying to pull data from MySQL, and I am using the Kafka provided by Ambari. I am new to Kafka and have a few doubts: 1. Where can I find logs for the running Kafka Connect cluster and the Debezium connectors? 2. I am not using Confluent; do I need to configure the Schema Registry, and why is it used?
Labels:
- Apache Ambari
- Apache Kafka
08-21-2018
11:35 AM
@Jordan Moore Thanks for the quick reply. However, I am currently using Kafka version 0.10.0.2.5; how can I give the Debezium MySQL connector path? I am getting the error: ERROR Uncaught exception in herder work thread, exiting: (org.apache.kafka.connect.runtime.distributed.DistributedHerder:183)
org.apache.kafka.connect.errors.ConnectException: Failed to find any class that implements Connector and which name matches io.debezium.connector.mysql.MySqlConnector, available connectors are: org.apache.kafka.connect.sink.SinkConnector,
org.apache.kafka.connect.tools.VerifiableSourceConnector,
org.apache.kafka.connect.file.FileStreamSinkConnector,
org.apache.kafka.connect.file.FileStreamSourceConnector,
org.apache.kafka.connect.source.SourceConnector,
org.apache.kafka.connect.tools.VerifiableSinkConnector,
org.apache.kafka.connect.tools.MockSourceConnector,
org.apache.kafka.connect.tools.MockConnector,
org.apache.kafka.connect.tools.MockSinkConnector
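The ConnectException above usually means the Connect worker cannot see the Debezium jars on its classpath. Kafka 0.10 predates the plugin.path option, so one hedged workaround is to export the connector jars via CLASSPATH before starting the worker. All paths below are illustrative assumptions, not HDP defaults.

```shell
# Assumption: the Debezium MySQL connector jars are unpacked under
# /opt/debezium. Kafka 0.10 has no plugin.path, so expose them via
# CLASSPATH before launching the distributed worker.
export CLASSPATH="/opt/debezium/debezium-connector-mysql/*"

# Start a distributed Connect worker using the HDP Kafka scripts
/usr/hdp/current/kafka-broker/bin/connect-distributed.sh \
  /usr/hdp/current/kafka-broker/config/connect-distributed.properties
```

After restarting the worker, the io.debezium.connector.mysql.MySqlConnector class should appear in the list of available connectors.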
08-20-2018
05:02 PM
I am new to Kafka, and I am trying to get data from MySQL into a Kafka broker using the Debezium MySQL connector. I am not able to understand how to run Kafka Connect in distributed mode to use the Debezium MySQL connector.
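In distributed mode, Kafka Connect runs as a worker process and connectors are registered over its REST API (port 8083 by default). A minimal sketch, assuming a worker already running on localhost; the host names, credentials, server id, and topic names below are placeholders, not values from this cluster:

```shell
# Register a Debezium MySQL connector with a running distributed worker.
# Every value in the config below is an illustrative placeholder.
curl -X POST -H "Content-Type: application/json" \
  http://localhost:8083/connectors -d '{
  "name": "mysql-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql-host",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz-password",
    "database.server.id": "184054",
    "database.server.name": "dbserver1",
    "database.history.kafka.bootstrap.servers": "kafka-host:6667",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}'
```

The worker persists this configuration in its internal Kafka topics, so the connector survives worker restarts.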
Labels:
- Apache Kafka
08-07-2018
12:14 PM
Hi @Akhil S Naik, instead of removing it from the database, can I add a new host with a new hostname and then remove the host that existed earlier?