Member since 04-26-2019 · 16 Posts · 0 Kudos Received · 0 Solutions
05-14-2019 09:27 AM
Hi Jay Kumar SenSharma, I also think that is the reason, because I am currently running a 3-node cluster and installing Kafka on all 3 machines. Thank you very much!
05-10-2019 05:40 PM
Hi everyone, I am using Hortonworks 3.1.1 but the Kafka broker does not start. Here is the log I have in /var/log/kafka/server.log:
[2019-05-10 16:00:51,711] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-05-10 16:07:25,008] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2019-05-10 16:07:25,518] INFO starting (kafka.server.KafkaServer)
[2019-05-10 16:07:25,519] INFO Connecting to zookeeper on am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181 (kafka.server.KafkaServer)
[2019-05-10 16:07:25,532] INFO [ZooKeeperClient] Initializing a new session to am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:07:25,551] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:07:25,616] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:07:25,881] INFO Cluster ID = z-4P_uf-RzmpT2QvMnOD2g (kafka.server.KafkaServer)
[2019-05-10 16:07:25,950] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = -1
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 10000
group.initial.rebalance.delay.ms = 3000
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 2.0-IV1
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = PLAINTEXT://am-bigdata-01.am.local:6667
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.0-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 600000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 86400000
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 3
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 6667
principal.builder.class = null
producer.metrics.cache.entry.expiration.ms = 300000
producer.metrics.cache.max.size = 1000
producer.metrics.enable = false
producer.purgatory.purge.interval.requests = 10000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location =
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location =
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 2
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 3
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181
zookeeper.connection.timeout.ms = 25000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 30000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2019-05-10 16:07:25,956] INFO KafkaConfig values:
... (KafkaConfig values identical to the first dump above)
 (kafka.server.KafkaConfig)
[2019-05-10 16:07:25,982] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:07:25,982] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:07:25,983] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:07:26,012] ERROR [KafkaServer id=1001] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /kafka-logs. A Kafka instance in another process or thread is using this directory.
at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:240)
at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:236)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:236)
at kafka.log.LogManager.<init>(LogManager.scala:97)
at kafka.log.LogManager$.apply(LogManager.scala:958)
at kafka.server.KafkaServer.startup(KafkaServer.scala:237)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:75)
at kafka.Kafka.main(Kafka.scala)
[2019-05-10 16:07:26,014] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
[2019-05-10 16:07:26,017] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:07:26,020] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:07:26,020] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:07:26,982] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:07:26,982] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:07:26,982] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:07:27,982] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:07:27,982] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:07:27,983] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:07:27,983] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:07:27,983] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:07:27,988] INFO [KafkaServer id=1001] shut down completed (kafka.server.KafkaServer)
[2019-05-10 16:07:27,988] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2019-05-10 16:07:27,990] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
[2019-05-10 16:10:51,711] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-05-10 16:11:34,264] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2019-05-10 16:11:34,787] INFO starting (kafka.server.KafkaServer)
[2019-05-10 16:11:34,788] INFO Connecting to zookeeper on am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181 (kafka.server.KafkaServer)
[2019-05-10 16:11:34,801] INFO [ZooKeeperClient] Initializing a new session to am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:11:34,820] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:11:34,893] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:11:35,122] INFO Cluster ID = z-4P_uf-RzmpT2QvMnOD2g (kafka.server.KafkaServer)
[2019-05-10 16:11:35,200] INFO KafkaConfig values:
... (KafkaConfig values identical to the first dump above)
 (kafka.server.KafkaConfig)
[2019-05-10 16:11:35,209] INFO KafkaConfig values:
... (KafkaConfig values identical to the first dump above)
 (kafka.server.KafkaConfig)
[2019-05-10 16:11:35,236] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:11:35,236] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:11:35,237] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:11:35,269] ERROR [KafkaServer id=1001] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /kafka-logs. A Kafka instance in another process or thread is using this directory.
        ... (same stack trace as in the first failure above)
[2019-05-10 16:11:35,271] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
[2019-05-10 16:11:35,274] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:11:35,277] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:11:35,278] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:11:36,237] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:11:36,237] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:11:36,237] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:11:37,237] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:11:37,237] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:11:37,237] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:11:37,238] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:11:37,238] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:11:37,245] INFO [KafkaServer id=1001] shut down completed (kafka.server.KafkaServer)
[2019-05-10 16:11:37,245] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2019-05-10 16:11:37,248] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
[2019-05-10 16:16:16,467] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2019-05-10 16:16:16,994] INFO starting (kafka.server.KafkaServer)
[2019-05-10 16:16:16,995] INFO Connecting to zookeeper on am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181 (kafka.server.KafkaServer)
[2019-05-10 16:16:17,009] INFO [ZooKeeperClient] Initializing a new session to am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:16:17,028] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:16:17,088] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:16:17,345] INFO Cluster ID = z-4P_uf-RzmpT2QvMnOD2g (kafka.server.KafkaServer)
[2019-05-10 16:16:17,424] INFO KafkaConfig values:
... (KafkaConfig values identical to the first dump above)
 (kafka.server.KafkaConfig)
[2019-05-10 16:16:17,431] INFO KafkaConfig values:
... (KafkaConfig values identical to the first dump above)
 (kafka.server.KafkaConfig)
[2019-05-10 16:16:17,458] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:17,458] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:17,459] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:17,492] ERROR [KafkaServer id=1001] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /kafka-logs. A Kafka instance in another process or thread is using this directory.
        ... (same stack trace as in the first failure above)
[2019-05-10 16:16:17,494] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
[2019-05-10 16:16:17,497] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:16:17,501] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:16:17,501] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:18,459] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:18,459] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:18,459] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:19,459] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:19,459] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:19,459] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:19,460] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:19,460] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:19,465] INFO [KafkaServer id=1001] shut down completed (kafka.server.KafkaServer)
[2019-05-10 16:16:19,466] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2019-05-10 16:16:19,468] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
[2019-05-10 16:20:51,712] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-05-10 16:21:18,284] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2019-05-10 16:21:18,803] INFO starting (kafka.server.KafkaServer)
[2019-05-10 16:21:18,804] INFO Connecting to zookeeper on am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181 (kafka.server.KafkaServer)
[2019-05-10 16:21:18,818] INFO [ZooKeeperClient] Initializing a new session to am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:21:18,838] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:21:18,904] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:21:19,210] INFO Cluster ID = z-4P_uf-RzmpT2QvMnOD2g (kafka.server.KafkaServer)
[2019-05-10 16:21:19,288] INFO KafkaConfig values:
... (KafkaConfig values identical to the first dump above; excerpt truncated)
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location =
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location =
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 2
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 3
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181
zookeeper.connection.timeout.ms = 25000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 30000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2019-05-10 16:21:19,296] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = -1
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 10000
group.initial.rebalance.delay.ms = 3000
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 2.0-IV1
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = PLAINTEXT://am-bigdata-01.am.local:6667
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.0-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 600000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 86400000
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 3
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 6667
principal.builder.class = null
producer.metrics.cache.entry.expiration.ms = 300000
producer.metrics.cache.max.size = 1000
producer.metrics.enable = false
producer.purgatory.purge.interval.requests = 10000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location =
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location =
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 2
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 3
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181
zookeeper.connection.timeout.ms = 25000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 30000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2019-05-10 16:21:19,325] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:19,325] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:19,326] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:19,358] ERROR [KafkaServer id=1001] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /kafka-logs. A Kafka instance in another process or thread is using this directory.
at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:240)
at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:236)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:236)
at kafka.log.LogManager.<init>(LogManager.scala:97)
at kafka.log.LogManager$.apply(LogManager.scala:958)
at kafka.server.KafkaServer.startup(KafkaServer.scala:237)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:75)
at kafka.Kafka.main(Kafka.scala)
[2019-05-10 16:21:19,361] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
[2019-05-10 16:21:19,363] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:21:19,367] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:21:19,368] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:20,325] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:20,325] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:20,326] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:21,325] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:21,325] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:21,325] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:21,326] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:21,326] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:21,331] INFO [KafkaServer id=1001] shut down completed (kafka.server.KafkaServer)
[2019-05-10 16:21:21,331] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2019-05-10 16:21:21,333] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
Please help me.
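For reference, the fatal error in the log above is "Failed to acquire lock on file .lock in /kafka-logs. A Kafka instance in another process or thread is using this directory." Below is a minimal diagnostic sketch (not the official fix): it assumes a Linux host with /proc available, Python 3 installed, and that log.dirs is /kafka-logs as shown in the posted config, and it simply lists any process that still has /kafka-logs/.lock open, which is usually a stale or duplicate broker process.

#!/usr/bin/env python3
# Diagnostic sketch, assuming a Linux host and log.dirs = /kafka-logs.
# Lists every process that currently has the broker's .lock file open.
import os

LOCK_FILE = "/kafka-logs/.lock"  # taken from log.dirs in the config above

def holders(path):
    """Yield (pid, cmdline) for every process with `path` open."""
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = f"/proc/{pid}/fd"
        try:
            fds = os.listdir(fd_dir)
        except (PermissionError, FileNotFoundError):
            continue  # process exited, or not readable without root
        for fd in fds:
            try:
                if os.readlink(f"{fd_dir}/{fd}") == path:
                    with open(f"/proc/{pid}/cmdline", "rb") as f:
                        cmd = f.read().replace(b"\0", b" ").decode(errors="replace")
                    yield int(pid), cmd.strip()
                    break
            except OSError:
                continue  # fd closed while we were looking

if __name__ == "__main__":
    found = list(holders(LOCK_FILE))
    if found:
        for pid, cmd in found:
            print(f"PID {pid} holds {LOCK_FILE}: {cmd}")
    else:
        print(f"No process currently has {LOCK_FILE} open.")

Run as root on the broker host, this should reveal whether an old broker process (or some other service) is still holding the lock; once that process is stopped, the broker would normally be able to acquire /kafka-logs/.lock and start.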
05-10-2019
03:59 PM
hi @Geoffrey Shelton Okot My kafka service is currently not working , Can you help me please ? Here is the log on : /var/log/kafka/server.log and /var/log/kafka/kafka.err [2019-05-10 16:00:51,711] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [2019-05-10 16:07:25,008] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) [2019-05-10 16:07:25,518] INFO starting (kafka.server.KafkaServer) [2019-05-10 16:07:25,519] INFO Connecting to zookeeper on am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181 (kafka.server.KafkaServer) [2019-05-10 16:07:25,532] INFO [ZooKeeperClient] Initializing a new session to am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181. (kafka.zookeeper.ZooKeeperClient) [2019-05-10 16:07:25,551] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) [2019-05-10 16:07:25,616] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient) [2019-05-10 16:07:25,881] INFO Cluster ID = z-4P_uf-RzmpT2QvMnOD2g (kafka.server.KafkaServer) [2019-05-10 16:07:25,950] INFO KafkaConfig values: advertised.host.name = null advertised.listeners = null advertised.port = null alter.config.policy.class.name = null alter.log.dirs.replication.quota.window.num = 11 alter.log.dirs.replication.quota.window.size.seconds = 1 authorizer.class.name = auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.id = -1 broker.id.generation.enable = true broker.rack = null client.quota.callback.class = null compression.type = producer connections.max.idle.ms = 600000 controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1 delegation.token.expiry.check.interval.ms = 3600000 delegation.token.expiry.time.ms = 86400000 delegation.token.master.key = null delegation.token.max.lifetime.ms = 604800000 delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true fetch.purgatory.purge.interval.requests = 10000 group.initial.rebalance.delay.ms = 3000 group.max.session.timeout.ms = 300000 group.min.session.timeout.ms = 6000 host.name = inter.broker.listener.name = null inter.broker.protocol.version = 2.0-IV1 leader.imbalance.check.interval.seconds = 300 leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL listeners = PLAINTEXT://am-bigdata-01.am.local:6667 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = /tmp/kafka-logs log.dirs = /kafka-logs log.flush.interval.messages = 9223372036854775807 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.message.downconversion.enable = true log.message.format.version = 2.0-IV1 
log.message.timestamp.difference.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 600000 log.retention.hours = 168 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = max.incremental.fetch.session.cache.slots = 1000 message.max.bytes = 1000000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 num.io.threads = 8 num.network.threads = 3 num.partitions = 1 num.recovery.threads.per.data.dir = 1 num.replica.alter.log.dirs.threads = null num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.required.acks = -1 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 86400000 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 3 offsets.topic.segment.bytes = 104857600 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding password.encoder.iterations = 4096 password.encoder.key.length = 128 password.encoder.keyfactory.algorithm = null password.encoder.old.secret = null password.encoder.secret = null port = 6667 principal.builder.class = null producer.metrics.cache.entry.expiration.ms = 300000 producer.metrics.cache.max.size = 1000 producer.metrics.enable = false producer.purgatory.purge.interval.requests = 10000 queued.max.request.bytes = -1 queued.max.requests = 500 quota.consumer.default = 9223372036854775807 quota.producer.default = 9223372036854775807 quota.window.num = 11 quota.window.size.seconds = 1 replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 10000 replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.client.callback.handler.class = null sasl.enabled.mechanisms = [GSSAPI] sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism.inter.broker.protocol = GSSAPI sasl.server.callback.handler.class = null security.inter.broker.protocol = PLAINTEXT socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = [] ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = [hidden] ssl.keymanager.algorithm = SunX509 ssl.keystore.location = ssl.keystore.password = [hidden] ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null 
ssl.trustmanager.algorithm = PKIX ssl.truststore.location = ssl.truststore.password = [hidden] ssl.truststore.type = JKS transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000 transaction.max.timeout.ms = 900000 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 2 transaction.state.log.num.partitions = 50 transaction.state.log.replication.factor = 3 transaction.state.log.segment.bytes = 104857600 transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false zookeeper.connect = am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181 zookeeper.connection.timeout.ms = 25000 zookeeper.max.in.flight.requests = 10 zookeeper.session.timeout.ms = 30000 zookeeper.set.acl = false zookeeper.sync.time.ms = 2000 (kafka.server.KafkaConfig) [2019-05-10 16:07:25,956] INFO KafkaConfig values: advertised.host.name = null advertised.listeners = null advertised.port = null alter.config.policy.class.name = null alter.log.dirs.replication.quota.window.num = 11 alter.log.dirs.replication.quota.window.size.seconds = 1 authorizer.class.name = auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.id = -1 broker.id.generation.enable = true broker.rack = null client.quota.callback.class = null compression.type = producer connections.max.idle.ms = 600000 controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1 delegation.token.expiry.check.interval.ms = 3600000 delegation.token.expiry.time.ms = 86400000 delegation.token.master.key = null delegation.token.max.lifetime.ms = 604800000 delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true fetch.purgatory.purge.interval.requests = 10000 group.initial.rebalance.delay.ms = 3000 group.max.session.timeout.ms = 300000 group.min.session.timeout.ms = 6000 host.name = inter.broker.listener.name = null inter.broker.protocol.version = 2.0-IV1 leader.imbalance.check.interval.seconds = 300 leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL listeners = PLAINTEXT://am-bigdata-01.am.local:6667 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = /tmp/kafka-logs log.dirs = /kafka-logs log.flush.interval.messages = 9223372036854775807 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.message.downconversion.enable = true log.message.format.version = 2.0-IV1 log.message.timestamp.difference.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 600000 log.retention.hours = 168 log.retention.minutes = null log.retention.ms = null log.roll.hours = 
168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = max.incremental.fetch.session.cache.slots = 1000 message.max.bytes = 1000000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 num.io.threads = 8 num.network.threads = 3 num.partitions = 1 num.recovery.threads.per.data.dir = 1 num.replica.alter.log.dirs.threads = null num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.required.acks = -1 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 86400000 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 3 offsets.topic.segment.bytes = 104857600 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding password.encoder.iterations = 4096 password.encoder.key.length = 128 password.encoder.keyfactory.algorithm = null password.encoder.old.secret = null password.encoder.secret = null port = 6667 principal.builder.class = null producer.metrics.cache.entry.expiration.ms = 300000 producer.metrics.cache.max.size = 1000 producer.metrics.enable = false producer.purgatory.purge.interval.requests = 10000 queued.max.request.bytes = -1 queued.max.requests = 500 quota.consumer.default = 9223372036854775807 quota.producer.default = 9223372036854775807 quota.window.num = 11 quota.window.size.seconds = 1 replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 10000 replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.client.callback.handler.class = null sasl.enabled.mechanisms = [GSSAPI] sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism.inter.broker.protocol = GSSAPI sasl.server.callback.handler.class = null security.inter.broker.protocol = PLAINTEXT socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = [] ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = [hidden] ssl.keymanager.algorithm = SunX509 ssl.keystore.location = ssl.keystore.password = [hidden] ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = ssl.truststore.password = [hidden] ssl.truststore.type = JKS transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000 transaction.max.timeout.ms = 900000 transaction.remove.expired.transaction.cleanup.interval.ms = 
3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 2 transaction.state.log.num.partitions = 50 transaction.state.log.replication.factor = 3 transaction.state.log.segment.bytes = 104857600 transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false zookeeper.connect = am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181 zookeeper.connection.timeout.ms = 25000 zookeeper.max.in.flight.requests = 10 zookeeper.session.timeout.ms = 30000 zookeeper.set.acl = false zookeeper.sync.time.ms = 2000 (kafka.server.KafkaConfig) [2019-05-10 16:07:25,982] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:07:25,982] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:07:25,983] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:07:26,012] ERROR [KafkaServer id=1001] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer) org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /kafka-logs. A Kafka instance in another process or thread is using this directory. at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:240) at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:236) at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241) at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241) at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104) at kafka.log.LogManager.lockLogDirs(LogManager.scala:236) at kafka.log.LogManager.<init>(LogManager.scala:97) at kafka.log.LogManager$.apply(LogManager.scala:958) at kafka.server.KafkaServer.startup(KafkaServer.scala:237) at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38) at kafka.Kafka$.main(Kafka.scala:75) at kafka.Kafka.main(Kafka.scala) [2019-05-10 16:07:26,014] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer) [2019-05-10 16:07:26,017] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient) [2019-05-10 16:07:26,020] INFO [ZooKeeperClient] Closed. 
(kafka.zookeeper.ZooKeeperClient) [2019-05-10 16:07:26,020] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:07:26,982] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:07:26,982] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:07:26,982] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:07:27,982] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:07:27,982] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:07:27,983] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:07:27,983] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:07:27,983] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:07:27,988] INFO [KafkaServer id=1001] shut down completed (kafka.server.KafkaServer) [2019-05-10 16:07:27,988] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable) [2019-05-10 16:07:27,990] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer) [2019-05-10 16:10:51,711] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [2019-05-10 16:11:34,264] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) [2019-05-10 16:11:34,787] INFO starting (kafka.server.KafkaServer) [2019-05-10 16:11:34,788] INFO Connecting to zookeeper on am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181 (kafka.server.KafkaServer) [2019-05-10 16:11:34,801] INFO [ZooKeeperClient] Initializing a new session to am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181. (kafka.zookeeper.ZooKeeperClient) [2019-05-10 16:11:34,820] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) [2019-05-10 16:11:34,893] INFO [ZooKeeperClient] Connected. 
(kafka.zookeeper.ZooKeeperClient) [2019-05-10 16:11:35,122] INFO Cluster ID = z-4P_uf-RzmpT2QvMnOD2g (kafka.server.KafkaServer) [2019-05-10 16:11:35,200] INFO KafkaConfig values: advertised.host.name = null advertised.listeners = null advertised.port = null alter.config.policy.class.name = null alter.log.dirs.replication.quota.window.num = 11 alter.log.dirs.replication.quota.window.size.seconds = 1 authorizer.class.name = auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.id = -1 broker.id.generation.enable = true broker.rack = null client.quota.callback.class = null compression.type = producer connections.max.idle.ms = 600000 controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1 delegation.token.expiry.check.interval.ms = 3600000 delegation.token.expiry.time.ms = 86400000 delegation.token.master.key = null delegation.token.max.lifetime.ms = 604800000 delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true fetch.purgatory.purge.interval.requests = 10000 group.initial.rebalance.delay.ms = 3000 group.max.session.timeout.ms = 300000 group.min.session.timeout.ms = 6000 host.name = inter.broker.listener.name = null inter.broker.protocol.version = 2.0-IV1 leader.imbalance.check.interval.seconds = 300 leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL listeners = PLAINTEXT://am-bigdata-01.am.local:6667 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = /tmp/kafka-logs log.dirs = /kafka-logs log.flush.interval.messages = 9223372036854775807 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.message.downconversion.enable = true log.message.format.version = 2.0-IV1 log.message.timestamp.difference.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 600000 log.retention.hours = 168 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = max.incremental.fetch.session.cache.slots = 1000 message.max.bytes = 1000000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 num.io.threads = 8 num.network.threads = 3 num.partitions = 1 num.recovery.threads.per.data.dir = 1 num.replica.alter.log.dirs.threads = null num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.required.acks = -1 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 
600000 offsets.retention.minutes = 86400000 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 3 offsets.topic.segment.bytes = 104857600 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding password.encoder.iterations = 4096 password.encoder.key.length = 128 password.encoder.keyfactory.algorithm = null password.encoder.old.secret = null password.encoder.secret = null port = 6667 principal.builder.class = null producer.metrics.cache.entry.expiration.ms = 300000 producer.metrics.cache.max.size = 1000 producer.metrics.enable = false producer.purgatory.purge.interval.requests = 10000 queued.max.request.bytes = -1 queued.max.requests = 500 quota.consumer.default = 9223372036854775807 quota.producer.default = 9223372036854775807 quota.window.num = 11 quota.window.size.seconds = 1 replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 10000 replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.client.callback.handler.class = null sasl.enabled.mechanisms = [GSSAPI] sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism.inter.broker.protocol = GSSAPI sasl.server.callback.handler.class = null security.inter.broker.protocol = PLAINTEXT socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = [] ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = [hidden] ssl.keymanager.algorithm = SunX509 ssl.keystore.location = ssl.keystore.password = [hidden] ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = ssl.truststore.password = [hidden] ssl.truststore.type = JKS transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000 transaction.max.timeout.ms = 900000 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 2 transaction.state.log.num.partitions = 50 transaction.state.log.replication.factor = 3 transaction.state.log.segment.bytes = 104857600 transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false zookeeper.connect = am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181 zookeeper.connection.timeout.ms = 25000 zookeeper.max.in.flight.requests = 10 zookeeper.session.timeout.ms = 30000 zookeeper.set.acl = false zookeeper.sync.time.ms = 2000 (kafka.server.KafkaConfig) [2019-05-10 16:11:35,209] INFO KafkaConfig values: advertised.host.name = null advertised.listeners = null advertised.port = null 
alter.config.policy.class.name = null alter.log.dirs.replication.quota.window.num = 11 alter.log.dirs.replication.quota.window.size.seconds = 1 authorizer.class.name = auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.id = -1 broker.id.generation.enable = true broker.rack = null client.quota.callback.class = null compression.type = producer connections.max.idle.ms = 600000 controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1 delegation.token.expiry.check.interval.ms = 3600000 delegation.token.expiry.time.ms = 86400000 delegation.token.master.key = null delegation.token.max.lifetime.ms = 604800000 delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true fetch.purgatory.purge.interval.requests = 10000 group.initial.rebalance.delay.ms = 3000 group.max.session.timeout.ms = 300000 group.min.session.timeout.ms = 6000 host.name = inter.broker.listener.name = null inter.broker.protocol.version = 2.0-IV1 leader.imbalance.check.interval.seconds = 300 leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL listeners = PLAINTEXT://am-bigdata-01.am.local:6667 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = /tmp/kafka-logs log.dirs = /kafka-logs log.flush.interval.messages = 9223372036854775807 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.message.downconversion.enable = true log.message.format.version = 2.0-IV1 log.message.timestamp.difference.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 600000 log.retention.hours = 168 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = max.incremental.fetch.session.cache.slots = 1000 message.max.bytes = 1000000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 num.io.threads = 8 num.network.threads = 3 num.partitions = 1 num.recovery.threads.per.data.dir = 1 num.replica.alter.log.dirs.threads = null num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.required.acks = -1 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 86400000 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 3 offsets.topic.segment.bytes = 104857600 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 
password.encoder.iterations = 4096 password.encoder.key.length = 128 password.encoder.keyfactory.algorithm = null password.encoder.old.secret = null password.encoder.secret = null port = 6667 principal.builder.class = null producer.metrics.cache.entry.expiration.ms = 300000 producer.metrics.cache.max.size = 1000 producer.metrics.enable = false producer.purgatory.purge.interval.requests = 10000 queued.max.request.bytes = -1 queued.max.requests = 500 quota.consumer.default = 9223372036854775807 quota.producer.default = 9223372036854775807 quota.window.num = 11 quota.window.size.seconds = 1 replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 10000 replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.client.callback.handler.class = null sasl.enabled.mechanisms = [GSSAPI] sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism.inter.broker.protocol = GSSAPI sasl.server.callback.handler.class = null security.inter.broker.protocol = PLAINTEXT socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = [] ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = [hidden] ssl.keymanager.algorithm = SunX509 ssl.keystore.location = ssl.keystore.password = [hidden] ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = ssl.truststore.password = [hidden] ssl.truststore.type = JKS transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000 transaction.max.timeout.ms = 900000 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 2 transaction.state.log.num.partitions = 50 transaction.state.log.replication.factor = 3 transaction.state.log.segment.bytes = 104857600 transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false zookeeper.connect = am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181 zookeeper.connection.timeout.ms = 25000 zookeeper.max.in.flight.requests = 10 zookeeper.session.timeout.ms = 30000 zookeeper.set.acl = false zookeeper.sync.time.ms = 2000 (kafka.server.KafkaConfig) [2019-05-10 16:11:35,236] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:11:35,236] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:11:35,237] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 
16:11:35,269] ERROR [KafkaServer id=1001] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer) org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /kafka-logs. A Kafka instance in another process or thread is using this directory. at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:240) at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:236) at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241) at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241) at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104) at kafka.log.LogManager.lockLogDirs(LogManager.scala:236) at kafka.log.LogManager.<init>(LogManager.scala:97) at kafka.log.LogManager$.apply(LogManager.scala:958) at kafka.server.KafkaServer.startup(KafkaServer.scala:237) at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38) at kafka.Kafka$.main(Kafka.scala:75) at kafka.Kafka.main(Kafka.scala) [2019-05-10 16:11:35,271] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer) [2019-05-10 16:11:35,274] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient) [2019-05-10 16:11:35,277] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient) [2019-05-10 16:11:35,278] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:11:36,237] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:11:36,237] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:11:36,237] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:11:37,237] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:11:37,237] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:11:37,237] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:11:37,238] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:11:37,238] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:11:37,245] INFO [KafkaServer id=1001] shut down completed (kafka.server.KafkaServer) [2019-05-10 16:11:37,245] ERROR Exiting Kafka. 
(kafka.server.KafkaServerStartable)
[2019-05-10 16:11:37,248] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
[2019-05-10 16:16:16,467] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2019-05-10 16:16:16,994] INFO starting (kafka.server.KafkaServer)
[2019-05-10 16:16:16,995] INFO Connecting to zookeeper on am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181 (kafka.server.KafkaServer)
[2019-05-10 16:16:17,009] INFO [ZooKeeperClient] Initializing a new session to am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:16:17,028] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:16:17,088] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:16:17,345] INFO Cluster ID = z-4P_uf-RzmpT2QvMnOD2g (kafka.server.KafkaServer)
[2019-05-10 16:16:17,424] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = -1
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 10000
group.initial.rebalance.delay.ms = 3000
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 2.0-IV1
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = PLAINTEXT://am-bigdata-01.am.local:6667
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.0-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 600000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 86400000
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 3
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 6667
principal.builder.class = null
producer.metrics.cache.entry.expiration.ms = 300000
producer.metrics.cache.max.size = 1000
producer.metrics.enable = false
producer.purgatory.purge.interval.requests = 10000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location =
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location =
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 2
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 3
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181
zookeeper.connection.timeout.ms = 25000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 30000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2019-05-10 16:16:17,458] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:17,458] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:17,459] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:17,492] ERROR [KafkaServer id=1001] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /kafka-logs. A Kafka instance in another process or thread is using this directory.
	at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:240)
	at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:236)
	at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
	at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
	at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
	at kafka.log.LogManager.lockLogDirs(LogManager.scala:236)
	at kafka.log.LogManager.<init>(LogManager.scala:97)
	at kafka.log.LogManager$.apply(LogManager.scala:958)
	at kafka.server.KafkaServer.startup(KafkaServer.scala:237)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
	at kafka.Kafka$.main(Kafka.scala:75)
	at kafka.Kafka.main(Kafka.scala)
[2019-05-10 16:16:17,494] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
[2019-05-10 16:16:17,497] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:16:17,501] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:16:17,501] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:18,459] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:18,459] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:18,459] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:19,459] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:19,459] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:19,459] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:19,460] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:19,460] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:16:19,465] INFO [KafkaServer id=1001] shut down completed (kafka.server.KafkaServer)
[2019-05-10 16:16:19,466] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2019-05-10 16:16:19,468] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
[2019-05-10 16:20:51,712] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-05-10 16:21:18,284] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2019-05-10 16:21:18,803] INFO starting (kafka.server.KafkaServer)
[2019-05-10 16:21:18,804] INFO Connecting to zookeeper on am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181 (kafka.server.KafkaServer)
[2019-05-10 16:21:18,818] INFO [ZooKeeperClient] Initializing a new session to am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:21:18,838] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:21:18,904] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:21:19,210] INFO Cluster ID = z-4P_uf-RzmpT2QvMnOD2g (kafka.server.KafkaServer)
[2019-05-10 16:21:19,325] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:19,325] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:19,326] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:19,358] ERROR [KafkaServer id=1001] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /kafka-logs. A Kafka instance in another process or thread is using this directory.
[2019-05-10 16:21:19,361] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
[2019-05-10 16:21:19,363] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:21:19,367] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:21:19,368] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:20,325] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:20,325] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:20,326] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:21,325] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:21,325] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:21,325] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:21,326] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:21,326] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:21:21,331] INFO [KafkaServer id=1001] shut down completed (kafka.server.KafkaServer)
[2019-05-10 16:21:21,331] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2019-05-10 16:21:21,333] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
[2019-05-10 16:30:51,711] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-05-10 16:36:58,207] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2019-05-10 16:36:58,757] INFO starting (kafka.server.KafkaServer)
[2019-05-10 16:36:58,758] INFO Connecting to zookeeper on am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181 (kafka.server.KafkaServer)
[2019-05-10 16:36:58,777] INFO [ZooKeeperClient] Initializing a new session to am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:36:58,804] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:36:58,863] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:36:59,129] INFO Cluster ID = z-4P_uf-RzmpT2QvMnOD2g (kafka.server.KafkaServer)
[2019-05-10 16:36:59,237] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:36:59,237] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:36:59,238] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:36:59,268] ERROR [KafkaServer id=1001] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /kafka-logs. A Kafka instance in another process or thread is using this directory.
[2019-05-10 16:36:59,271] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
[2019-05-10 16:36:59,273] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:36:59,276] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:36:59,277] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:37:00,238] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:37:00,238] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:37:00,239] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:37:01,238] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:37:01,238] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:37:01,238] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:37:01,239] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:37:01,239] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:37:01,245] INFO [KafkaServer id=1001] shut down completed (kafka.server.KafkaServer)
[2019-05-10 16:37:01,246] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2019-05-10 16:37:01,248] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
[2019-05-10 16:40:51,711] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-05-10 16:41:59,912] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2019-05-10 16:42:00,411] INFO starting (kafka.server.KafkaServer)
[2019-05-10 16:42:00,412] INFO Connecting to zookeeper on am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181 (kafka.server.KafkaServer)
[2019-05-10 16:42:00,425] INFO [ZooKeeperClient] Initializing a new session to am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:42:00,444] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:42:00,505] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:42:00,779] INFO Cluster ID = z-4P_uf-RzmpT2QvMnOD2g (kafka.server.KafkaServer)
password.encoder.iterations = 4096 password.encoder.key.length = 128 password.encoder.keyfactory.algorithm = null password.encoder.old.secret = null password.encoder.secret = null port = 6667 principal.builder.class = null producer.metrics.cache.entry.expiration.ms = 300000 producer.metrics.cache.max.size = 1000 producer.metrics.enable = false producer.purgatory.purge.interval.requests = 10000 queued.max.request.bytes = -1 queued.max.requests = 500 quota.consumer.default = 9223372036854775807 quota.producer.default = 9223372036854775807 quota.window.num = 11 quota.window.size.seconds = 1 replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 10000 replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.client.callback.handler.class = null sasl.enabled.mechanisms = [GSSAPI] sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism.inter.broker.protocol = GSSAPI sasl.server.callback.handler.class = null security.inter.broker.protocol = PLAINTEXT socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = [] ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = [hidden] ssl.keymanager.algorithm = SunX509 ssl.keystore.location = ssl.keystore.password = [hidden] ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = ssl.truststore.password = [hidden] ssl.truststore.type = JKS transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000 transaction.max.timeout.ms = 900000 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 2 transaction.state.log.num.partitions = 50 transaction.state.log.replication.factor = 3 transaction.state.log.segment.bytes = 104857600 transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false zookeeper.connect = am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181 zookeeper.connection.timeout.ms = 25000 zookeeper.max.in.flight.requests = 10 zookeeper.session.timeout.ms = 30000 zookeeper.set.acl = false zookeeper.sync.time.ms = 2000 (kafka.server.KafkaConfig) [2019-05-10 16:42:00,879] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:42:00,879] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 16:42:00,880] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) [2019-05-10 
16:42:00,909] ERROR [KafkaServer id=1001] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /kafka-logs. A Kafka instance in another process or thread is using this directory.
 at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:240)
 at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:236)
 at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
 at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
 at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
 at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
 at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
 at kafka.log.LogManager.lockLogDirs(LogManager.scala:236)
 at kafka.log.LogManager.<init>(LogManager.scala:97)
 at kafka.log.LogManager$.apply(LogManager.scala:958)
 at kafka.server.KafkaServer.startup(KafkaServer.scala:237)
 at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
 at kafka.Kafka$.main(Kafka.scala:75)
 at kafka.Kafka.main(Kafka.scala)
[2019-05-10 16:42:00,912] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
[2019-05-10 16:42:00,914] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:42:00,917] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2019-05-10 16:42:00,918] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:42:01,880] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:42:01,880] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:42:01,880] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:42:02,879] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:42:02,879] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:42:02,880] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:42:02,880] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:42:02,880] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-05-10 16:42:02,885] INFO [KafkaServer id=1001] shut down completed (kafka.server.KafkaServer)
[2019-05-10 16:42:02,885] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2019-05-10 16:42:02,887] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
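The fatal error above is the actual reason the broker exits: another process already holds the .lock file inside log.dirs (/kafka-logs), so this instance cannot take ownership of the directory. The same server.log shows a broker starting at 16:07 and this failure at 16:42, which is consistent with a second start attempt while an earlier broker process is still alive. A minimal check sketch, assuming the /kafka-logs path from the config above (how the broker is started on this node is an assumption; adapt as needed):

# Is another Kafka broker process (possibly half-dead) still running on this node?
ps -ef | grep -i '[k]afka.Kafka'

# Which process is holding the lock file in the log directory?
lsof /kafka-logs/.lock

# Only if no broker process is left, remove the stale lock and start the broker again from Ambari
rm -f /kafka-logs/.lock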
... View more
05-10-2019
07:04 AM
Thanks Geoffrey Shelton Okot, I have fixed that error. Thank you!
... View more
05-04-2019
02:34 AM
Hi @Geoffrey Shelton Okot, I ran the command "$ hdfs dfs -chown -R mapred:hadoop /mr-history" and the MapReduce service now works normally, but the YARN service still fails: Timeline Service V2.0 is stopped. I have attached the image below.
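The /mr-history change above is an HDFS ownership fix; since Timeline Service V2.0 is still down, it is worth checking whether the remaining failure is also an ownership or permissions issue and what the Timeline Service log actually reports. A minimal check sketch, assuming the standard HDP log directory /var/log/hadoop-yarn/yarn and the /app-logs path that appears elsewhere in these logs (both assumptions to verify in Ambari):

# Confirm the history directory now has the expected owner
hdfs dfs -ls /mr-history

# Aggregated application logs should be writable by YARN (verify the expected owner in Ambari)
hdfs dfs -ls / | grep app-logs

# The newest YARN / Timeline Service log on that host usually names the real failure
ls -lt /var/log/hadoop-yarn/yarn/ | head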
... View more
05-03-2019
03:51 PM
@Geoffrey Shelton Okot I ran the service check on YARN and here are the logs I received:
Traceback (most recent call last):
 File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/service_check.py", line 185, in <module>
 ServiceCheck().execute()
 File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
 method(env)
 File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/service_check.py", line 121, in service_check
 user=params.smokeuser,
 File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
 result = function(command, **kwargs)
 File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
 tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
 File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
 result = _call(command, **kwargs_copy)
 File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
 raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'yarn org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls -num_containers 1 -jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar -timeout 300000 --queue default' returned 1.
19/05/03 06:20:03 INFO distributedshell.Client: Initializing Client
19/05/03 06:20:03 INFO distributedshell.Client: Running Client
19/05/03 06:20:03 INFO client.RMProxy: Connecting to ResourceManager at bigdata-01.am.local/172.16.8.2:8050
19/05/03 06:20:04 INFO client.AHSProxy: Connecting to Application History server at bigdata-02.am.local/172.16.8.3:10200
19/05/03 06:20:04 INFO distributedshell.Client: Got Cluster metric info from ASM, numNodeManagers=1
19/05/03 06:20:04 INFO distributedshell.Client: Got Cluster node info from ASM
19/05/03 06:20:04 INFO distributedshell.Client: Got node report from ASM for, nodeId=bigdata-03.am.local:45454, nodeAddress=bigdata-03.am.local:8042, nodeRackName=/default-rack, nodeNumContainers=0
19/05/03 06:20:04 INFO distributedshell.Client: Queue info, queueName=default, queueCurrentCapacity=0.0, queueMaxCapacity=1.0, queueApplicationCount=0, queueChildQueueCount=0
19/05/03 06:20:04 INFO distributedshell.Client: User ACL Info for Queue, queueName=root, userAcl=SUBMIT_APPLICATIONS
19/05/03 06:20:04 INFO distributedshell.Client: User ACL Info for Queue, queueName=root, userAcl=ADMINISTER_QUEUE
19/05/03 06:20:04 INFO distributedshell.Client: User ACL Info for Queue, queueName=default, userAcl=SUBMIT_APPLICATIONS
19/05/03 06:20:04 INFO distributedshell.Client: User ACL Info for Queue, queueName=default, userAcl=ADMINISTER_QUEUE
19/05/03 06:20:04 INFO distributedshell.Client: Max mem capability of resources in this cluster 9216
19/05/03 06:20:04 INFO distributedshell.Client: Max virtual cores capability of resources in this cluster 6
19/05/03 06:20:04 WARN distributedshell.Client: AM Memory not specified, use 100 mb as AM memory
19/05/03 06:20:04 WARN distributedshell.Client: AM vcore not specified, use 1 mb as AM vcores
19/05/03 06:20:04 WARN distributedshell.Client: AM Resource capability=<memory:100, vCores:1>
19/05/03 06:20:04 INFO distributedshell.Client: Copy App Master jar from local filesystem and add to local environment
19/05/03 06:20:05 ERROR distributedshell.Client: Error running Client
org.apache.hadoop.security.AccessControlException: Permission denied:
user=ambari-qa, access=EXECUTE, inode="/user/ambari-qa/DistributedShell":root:hdfs:drwxrwx--- at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:315) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:242) at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkDefaultEnforcer(RangerHdfsAuthorizer.java:589) at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:350) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1857) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1841) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1800) at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.resolvePathForStartFile(FSDirWriteFileOp.java:315) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2407) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2351) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:774) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:462) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88) at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:278) at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1211) at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1190) at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1128) at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:537) at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:534) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:548) at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:475) at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1118) at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098) at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:987) at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:414) at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:387) at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2369) at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2335) at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2298) at org.apache.hadoop.yarn.applications.distributedshell.Client.addToLocalResources(Client.java:1099) at org.apache.hadoop.yarn.applications.distributedshell.Client.run(Client.java:769) at org.apache.hadoop.yarn.applications.distributedshell.Client.main(Client.java:265) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=ambari-qa, access=EXECUTE, inode="/user/ambari-qa/DistributedShell":root:hdfs:drwxrwx--- at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:315) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:242) at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkDefaultEnforcer(RangerHdfsAuthorizer.java:589) at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:350) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1857) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1841) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1800) at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.resolvePathForStartFile(FSDirWriteFileOp.java:315) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2407) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2351) at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:774) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:462) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497) at org.apache.hadoop.ipc.Client.call(Client.java:1443) at org.apache.hadoop.ipc.Client.call(Client.java:1353) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) at com.sun.proxy.$Proxy14.create(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:362) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) at com.sun.proxy.$Proxy15.create(Unknown Source) at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:273) ... 19 more stdout: /var/lib/ambari-agent/data/output-2240.txt 2019-05-03 06:20:01,095 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf 2019-05-03 06:20:01,096 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78 2019-05-03 06:20:01,122 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf 2019-05-03 06:20:01,134 - HdfsResource['/user/ambari-qa'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://bigdata-01.am.local:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'ambari-qa', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0770} 2019-05-03 06:20:01,136 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://bigdata-01.am.local:50070/webhdfs/v1/user/ambari-qa?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp3Xsjzn 2>/tmp/tmp4GymcS''] {'logoutput': None, 'quiet': False} 2019-05-03 06:20:01,780 - call returned (0, '') 2019-05-03 06:20:01,781 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":4,"fileId":16388,"group":"hdfs","length":0,"modificationTime":1556838644161,"owner":"ambari-qa","pathSuffix":"","permission":"770","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'') 2019-05-03 06:20:01,784 - checked_call['yarn org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command ls -num_containers 1 -jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar -timeout 300000 --queue default'] {'path': '/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin', 'user': 'ambari-qa'}
Command failed after 1 tries
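The root cause is in the trace above: the smoke-test user ambari-qa is denied EXECUTE on /user/ambari-qa/DistributedShell because that directory is owned by root:hdfs with mode drwxrwx---, even though /user/ambari-qa itself belongs to ambari-qa. A minimal fix sketch run with HDFS superuser rights (the ambari-qa:hdfs owner and group below mirror the FileStatus shown in the output and are otherwise an assumption; adjust to your cluster):

# Inspect the current ownership under the smoke-test user's home
sudo -u hdfs hdfs dfs -ls /user/ambari-qa

# Hand the directory (including DistributedShell) back to ambari-qa
sudo -u hdfs hdfs dfs -chown -R ambari-qa:hdfs /user/ambari-qa

# Re-run the exact command the service check executes
sudo -u ambari-qa yarn org.apache.hadoop.yarn.applications.distributedshell.Client \
  -shell_command ls -num_containers 1 \
  -jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar \
  -timeout 300000 --queue default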
... View more
05-03-2019
06:07 AM
Hi Geoffrey Shelton Okot, here are the recent logs you need, but I can't upload them here because the file is too large. You can download it from the link below, or give me your email and I will send it to you. Link: https://www.fshare.vn/file/Y38M7S51FSGK?token=1556863604 Many thanks for your help.
... View more
05-02-2019
11:19 PM
Hi everyone, I use Hortonworks 3.1.1 on CentOS 7. Everything started normally after the install, but yesterday the YARN and MapReduce services stopped; I try to restart them, but after a few seconds they stop again automatically. Please help me! Here is the log from /var/log/hadoop-yarn/yarn/hadoop-mapreduce.jobsummary.log:
2019-04-27 15:57:23,150 INFO resourcemanager.RMAppManager$ApplicationSummary: appId=application_1554293667897_0131,name=JavaHBaseDistributedScan demo_kafka,user=hbase,queue=default,state=FINISHED,trackingUrl=http://bigdata-01.am.local:8088/proxy/application_1554293667897_0131/,appMasterHost=N/A,submitTime=1555412153616,startTime=1555412153617,finishTime=1555412160200,finalStatus=SUCCEEDED,memorySeconds=18035,vcoreSeconds=10,preemptedMemorySeconds=18035,preemptedVcoreSeconds=10,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=SPARK,resourceSeconds=18035 MB-seconds\, 10 vcore-seconds,preemptedResourceSeconds=18035 MB-seconds\, 10 vcore-seconds
2019-04-27 15:57:23,153 INFO resourcemanager.RMAppManager$ApplicationSummary: appId=application_1554293667897_0132,name=Thrift JDBC/ODBC Server,user=spark,queue=default,state=FAILED,trackingUrl=http://bigdata-01.am.local:8088/proxy/application_1554293667897_0132/,appMasterHost=N/A,submitTime=1555590937105,startTime=1555590937205,finishTime=1556006590180,finalStatus=FAILED,memorySeconds=425628448,vcoreSeconds=415652,preemptedMemorySeconds=425628448,preemptedVcoreSeconds=415652,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=SPARK,resourceSeconds=425628448 MB-seconds\, 415652 vcore-seconds,preemptedResourceSeconds=425628448 MB-seconds\, 415652 vcore-seconds
2019-04-27 15:57:23,153 INFO resourcemanager.RMAppManager$ApplicationSummary: appId=application_1554293667897_0134,name=Wordcount Background,user=hdfs,queue=default,state=FINISHED,trackingUrl=http://bigdata-01.am.local:8088/proxy/application_1554293667897_0134/,appMasterHost=N/A,submitTime=1555919241009,startTime=1555919241011,finishTime=1555930274213,finalStatus=SUCCEEDED,memorySeconds=56459868,vcoreSeconds=33083,preemptedMemorySeconds=56459868,preemptedVcoreSeconds=33083,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=SPARK,resourceSeconds=56459868 MB-seconds\, 33083 vcore-seconds,preemptedResourceSeconds=56459868 MB-seconds\, 33083 vcore-seconds
2019-04-27 15:57:23,153 INFO resourcemanager.RMAppManager$ApplicationSummary: appId=application_1556006587747_0001,name=HIVE-d222fe43-47e8-4777-99eb-1d626db7b1a9,user=hive,queue=default,state=FINISHED,trackingUrl=http://bigdata-01.am.local:8088/proxy/application_1556006587747_0001/,appMasterHost=N/A,submitTime=1556006598895,startTime=1556006598908,finishTime=1556007209543,finalStatus=SUCCEEDED,memorySeconds=1874359,vcoreSeconds=610,preemptedMemorySeconds=1874359,preemptedVcoreSeconds=610,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=TEZ,resourceSeconds=1874359 MB-seconds\, 610 vcore-seconds,preemptedResourceSeconds=1874359 MB-seconds\, 610 vcore-seconds
2019-04-27 15:57:23,153 INFO resourcemanager.RMAppManager$ApplicationSummary: appId=application_1556006587747_0002,name=Thrift JDBC/ODBC
Server,user=spark,queue=default,state=FAILED,trackingUrl=http://bigdata-01.am.local:8088/proxy/application_1556006587747_0002/,appMasterHost=N/A,submitTime=1556006610698,startTime=1556006610699,finishTime=1556046968256,finalStatus=FAILED,memorySeconds=40712387,vcoreSeconds=39758,preemptedMemorySeconds=40712387,preemptedVcoreSeconds=39758,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=SPARK,resourceSeconds=40712387 MB-seconds\, 39758 vcore-seconds,preemptedResourceSeconds=40712387 MB-seconds\, 39758 vcore-seconds 2019-04-27 15:57:23,154 INFO resourcemanager.RMAppManager$ApplicationSummary: appId=application_1556006587747_0003,name=Thrift JDBC/ODBC Server,user=spark,queue=default,state=FAILED,trackingUrl=http://bigdata-01.am.local:8088/proxy/application_1556006587747_0003/,appMasterHost=N/A,submitTime=1556050435549,startTime=1556050435552,finishTime=1556102048938,finalStatus=FAILED,memorySeconds=52852082,vcoreSeconds=51613,preemptedMemorySeconds=52852082,preemptedVcoreSeconds=51613,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=SPARK,resourceSeconds=52852082 MB-seconds\, 51613 vcore-seconds,preemptedResourceSeconds=52852082 MB-seconds\, 51613 vcore-seconds 2019-04-27 15:57:23,155 INFO resourcemanager.RMAppManager$ApplicationSummary: appId=application_1556006587747_0004,name=Thrift JDBC/ODBC Server,user=spark,queue=default,state=FAILED,trackingUrl=http://bigdata-01.am.local:8088/proxy/application_1556006587747_0004/,appMasterHost=N/A,submitTime=1556115260583,startTime=1556115260585,finishTime=1556126768579,finalStatus=FAILED,memorySeconds=11784135,vcoreSeconds=11507,preemptedMemorySeconds=11784135,preemptedVcoreSeconds=11507,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=SPARK,resourceSeconds=11784135 MB-seconds\, 11507 vcore-seconds,preemptedResourceSeconds=11784135 MB-seconds\, 11507 vcore-seconds 2019-04-27 15:57:23,156 INFO resourcemanager.RMAppManager$ApplicationSummary: appId=application_1556006587747_0005,name=Thrift JDBC/ODBC Server,user=spark,queue=default,state=FAILED,trackingUrl=http://bigdata-01.am.local:8088/proxy/application_1556006587747_0005/,appMasterHost=N/A,submitTime=1556136892217,startTime=1556136892219,finishTime=1556151248704,finalStatus=FAILED,memorySeconds=14700999,vcoreSeconds=14356,preemptedMemorySeconds=14700999,preemptedVcoreSeconds=14356,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=SPARK,resourceSeconds=14700999 MB-seconds\, 14356 vcore-seconds,preemptedResourceSeconds=14700999 MB-seconds\, 14356 vcore-seconds 2019-04-27 15:57:23,157 INFO resourcemanager.RMAppManager$ApplicationSummary: appId=application_1556006587747_0006,name=Thrift JDBC/ODBC Server,user=spark,queue=default,state=FAILED,trackingUrl=http://bigdata-01.am.local:8088/proxy/application_1556006587747_0006/,appMasterHost=N/A,submitTime=1556158512111,startTime=1556158512113,finishTime=1556182808281,finalStatus=FAILED,memorySeconds=24879206,vcoreSeconds=24296,preemptedMemorySeconds=24879206,preemptedVcoreSeconds=24296,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=SPARK,resourceSeconds=24879206 MB-seconds\, 24296 vcore-seconds,preemptedResourceSeconds=24879206 MB-seconds\, 24296 vcore-seconds
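The job-summary log above only records applications that have already finished or failed; it does not say why the ResourceManager and JobHistory daemons themselves keep stopping a few seconds after a restart. The daemon logs written at crash time are the better place to look. A minimal sketch, assuming the standard HDP location /var/log/hadoop-yarn/yarn seen above (exact file names include the host name and may differ):

# Is the ResourceManager process actually running after the restart attempt?
ps -ef | grep -i '[r]esourcemanager'

# Most recently written YARN daemon logs on the ResourceManager host
ls -lt /var/log/hadoop-yarn/yarn/ | head

# Tail the newest ResourceManager log for the real shutdown reason
tail -n 200 /var/log/hadoop-yarn/yarn/*resourcemanager*.log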
... View more
Labels:
- Apache Hadoop
- Apache YARN
05-02-2019
08:19 AM
Hi @Geoffrey Shelton Okot, I have a question, can you help me please? MapReduce2 and YARN now fail to start; I try to restart them, but after a few seconds they shut down automatically. Thank you.
... View more
04-29-2019
06:56 PM
@Geoffrey Shelton Okot Thank you very much
... View more
04-26-2019
10:39 AM
Here is my HBase log:
2019-04-26 10:39:06,273 INFO [main] master.HMaster: STARTING service HMaster
2019-04-26 10:39:06,274 INFO [main] util.VersionInfo: HBase 2.0.2.3.1.0.0-78
2019-04-26 10:39:06,274 INFO [main] util.VersionInfo: Source code repository git://ctr-e138-1518143905142-586755-01-000023.hwx.site/grid/0/jenkins/workspace/HDP-parallel-centos7/SOURCES/hbase revision=
2019-04-26 10:39:06,274 INFO [main] util.VersionInfo: Compiled by jenkins on Thu Dec 6 12:27:45 UTC 2018
2019-04-26 10:39:06,274 INFO [main] util.VersionInfo: From source with checksum 015c34650c163b249d16fc7e496a030e
2019-04-26 10:39:06,514 INFO [main] util.ServerCommandLine: hbase.tmp.dir: /tmp/hbase-hbase
2019-04-26 10:39:06,514 INFO [main] util.ServerCommandLine: hbase.rootdir: /apps/hbase/data
2019-04-26 10:39:06,514 INFO [main] util.ServerCommandLine: hbase.cluster.distributed: true
2019-04-26 10:39:06,514 INFO [main] util.ServerCommandLine: hbase.zookeeper.quorum: bigdata-03.am.local,bigdata-01.am.local,bigdata-02.am.local
2019-04-26 10:39:06,515 INFO [main] util.ServerCommandLine: env:PATH=/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/var/lib/ambari-agent
2019-04-26 10:39:06,515 INFO [main] util.ServerCommandLine: env:HISTCONTROL=ignoredups
2019-04-26 10:39:06,515 INFO [main] util.ServerCommandLine: env:HBASE_PID_DIR=/var/run/hbase
2019-04-26 10:39:06,515 INFO [main] util.ServerCommandLine: env:HBASE_REGIONSERVER_OPTS= -Xms4096m -Xmx4096m -XX:ParallelGCThreads=8
2019-04-26 10:39:06,515 INFO [main] util.ServerCommandLine: env:HBASE_CONF_DIR=/usr/hdp/current/hbase-master/conf
2019-04-26 10:39:06,515 INFO [main] util.ServerCommandLine: env:MAIL=/var/spool/mail/hbase
2019-04-26 10:39:06,515 INFO [main] util.ServerCommandLine: env:LD_LIBRARY_PATH=::/usr/hdp/3.1.0.0-78/hadoop/lib/native/Linux-amd64-64:/usr/hdp/3.1.0.0-78/hadoop/lib/native/Linux-amd64-64:/usr/hdp/3.1.0.0-78/hadoop/lib/native
2019-04-26 10:39:06,515 INFO [main] util.ServerCommandLine: env:LOGNAME=hbase
2019-04-26 10:39:06,515 INFO [main] util.ServerCommandLine: env:HBASE_REST_OPTS=
2019-04-26 10:39:06,515 INFO [main] util.ServerCommandLine: env:PWD=/home/hbase
2019-04-26 10:39:06,515 INFO [main] util.ServerCommandLine: env:HBASE_ROOT_LOGGER=INFO,RFA
2019-04-26 10:39:06,515 INFO [main] util.ServerCommandLine: env:LESSOPEN=||/usr/bin/lesspipe.sh %s
2019-04-26 10:39:06,515 INFO [main] util.ServerCommandLine: env:SHELL=/bin/bash
2019-04-26 10:39:06,515 INFO [main] util.ServerCommandLine: env:QT_GRAPHICSSYSTEM_CHECKED=1
2019-04-26 10:39:06,515 INFO [main] util.ServerCommandLine: env:HBASE_ENV_INIT=true
2019-04-26 10:39:06,516 INFO [main] util.ServerCommandLine: env:PHOENIX_QUERYSERVER_OPTS= -XX:ParallelGCThreads=8
2019-04-26 10:39:06,516 INFO [main] util.ServerCommandLine: env:QTINC=/usr/lib64/qt-3.3/include
2019-04-26 10:39:06,516 INFO [main] util.ServerCommandLine: env:HBASE_MASTER_OPTS= -Xmx1024m -XX:ParallelGCThreads=8
2019-04-26 10:39:06,516 INFO [main] util.ServerCommandLine: env:HBASE_MANAGES_ZK=false
2019-04-26 10:39:06,516 INFO [main] util.ServerCommandLine: env:HBASE_REGIONSERVERS=/usr/hdp/current/hbase-master/conf/regionservers
2019-04-26 10:39:06,516 INFO [main] util.ServerCommandLine: env:HADOOP_HOME=/usr/hdp/3.1.0.0-78/hadoop
2019-04-26 10:39:06,516 INFO [main] util.ServerCommandLine: env:HBASE_NICENESS=0
2019-04-26 10:39:06,516 INFO [main] util.ServerCommandLine: env:HBASE_OPTS=-Dhdp.version=3.1.0.0-78 -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:-ResizePLAB -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log -Djava.io.tmpdir=/tmp -verbose:gc -XX:-PrintGCCause -XX:+PrintAdaptiveSizePolicy -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc.log-201904261039 -Xmx1024m -XX:ParallelGCThreads=8 -Dhbase.log.dir=/var/log/hbase -Dhbase.log.file=hbase-hbase-master-bigdata-01.am.local.log -Dhbase.home.dir=/usr/hdp/current/hbase-master/bin/.. -Dhbase.id.str=hbase -Dhbase.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/3.1.0.0-78/hadoop/lib/native/Linux-amd64-64:/usr/hdp/3.1.0.0-78/hadoop/lib/native/Linux-amd64-64:/usr/hdp/3.1.0.0-78/hadoop/lib/native -Dhbase.security.logger=INFO,RFAS
2019-04-26 10:39:06,516 INFO [main] util.ServerCommandLine: env:HBASE_SECURITY_LOGGER=INFO,RFAS
2019-04-26 10:39:06,516 INFO [main] util.ServerCommandLine: env:SHLVL=3
2019-04-26 10:39:06,516 INFO [main] util.ServerCommandLine: env:ZOOKEEPER_HOME=/usr/hdp/3.1.0.0-78/zookeeper
2019-04-26 10:39:06,516 INFO [main] util.ServerCommandLine: env:HBASE_LOGFILE=hbase-hbase-master-bigdata-01.am.local.log
2019-04-26 10:39:06,516 INFO [main] util.ServerCommandLine: env:HISTSIZE=1000
2019-04-26 10:39:06,516 INFO [main] util.ServerCommandLine: env:JAVA_HOME=/usr/jdk64/jdk1.8.0_112
2019-04-26 10:39:06,516 INFO [main] util.ServerCommandLine: env:HDP_VERSION=3.1.0.0-78
2019-04-26 10:39:06,516 INFO [main] util.ServerCommandLine: env:XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt
2019-04-26 10:39:06,517 INFO [main] util.ServerCommandLine: env:LANG=en_US.UTF-8
2019-04-26 10:39:06,517 INFO [main] util.ServerCommandLine: env:XDG_SESSION_ID=c2933
2019-04-26 10:39:06,517 INFO [main] util.ServerCommandLine: env:HBASE_CLASSPATH=/usr/hdp/3.1.0.0-78/hadoop/conf:/usr/hdp/3.1.0.0-78/hadoop/*:/usr/hdp/3.1.0.0-78/hadoop/lib/*:/usr/hdp/3.1.0.0-78/zookeeper/*:
2019-04-26 10:39:06,517 INFO [main] util.ServerCommandLine: env:HBASE_IDENT_STRING=hbase
2019-04-26 10:39:06,517 INFO [main] util.ServerCommandLine: env:HBASE_ZNODE_FILE=/var/run/hbase/hbase-hbase-master.znode
2019-04-26 10:39:06,517 INFO [main] util.ServerCommandLine: env:HBASE_LOG_PREFIX=hbase-hbase-master-bigdata-01.am.local
2019-04-26 10:39:06,517 INFO [main] util.ServerCommandLine: env:HBASE_LOG_DIR=/var/log/hbase
2019-04-26 10:39:06,517 INFO [main] util.ServerCommandLine: env:USER=hbase
2019-04-26 10:39:06,518 INFO [main] util.ServerCommandLine: env:CLASSPATH=/usr/hdp/current/hbase-master/conf:/usr/jdk64/jdk1.8.0_112/lib/tools.jar:/usr/hdp/current/hbase-master/bin/..:/usr/hdp/current/hbase-master/bin/../lib/aopalliance-1.0.jar:/usr/hdp/current/hbase-master/bin/../lib/aopalliance-repackaged-2.5.0-b32.jar:/usr/hdp/current/hbase-master/bin/../lib/atlas-plugin-classloader-1.1.0.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/audience-annotations-0.5.0.jar:/usr/hdp/current/hbase-master/bin/../lib/avro-1.7.7.jar:/usr/hdp/current/hbase-master/bin/../lib/aws-java-sdk-bundle-1.11.271.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-beanutils-1.9.3.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-cli-1.2.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-codec-1.10.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-collections-3.2.2.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-compress-1.4.1.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-configuration2-2.1.1.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-crypto-1.0.0.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-csv-1.0.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-daemon-1.0.13.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-io-2.5.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-lang-2.6.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-lang3-3.6.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-logging-1.2.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-math3-3.6.1.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-net-3.6.jar:/usr/hdp/current/hbase-master/bin/../lib/curator-client-4.0.0.jar:/usr/hdp/current/hbase-master/bin/../lib/curator-framework-4.0.0.jar:/usr/hdp/current/hbase-master/bin/../lib/curator-recipes-4.0.0.jar:/usr/hdp/current/hbase-master/bin/../lib/disruptor-3.3.6.jar:/usr/hdp/current/hbase-master/bin/../lib/dnsjava-2.1.7.jar:/usr/hdp/current/hbase-master/bin/../lib/ehcache-3.3.1.jar:/usr/hdp/current/hbase-master/bin/../lib/findbugs-annotations-1.3.9-1.jar:/usr/hdp/current/hbase-master/bin/../lib/fst-2.50.jar:/usr/hdp/current/hbase-master/bin/../lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/hdp/current/hbase-master/bin/../lib/gson-2.2.4.jar:/usr/hdp/current/hbase-master/bin/../lib/guava-11.0.2.jar:/usr/hdp/current/hbase-master/bin/../lib/guice-4.0.jar:/usr/hdp/current/hbase-master/bin/../lib/guice-servlet-4.0.jar:/usr/hdp/current/hbase-master/bin/../lib/hamcrest-core-1.3.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-annotations-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-annotations-2.0.2.3.1.0.0-78-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-annotations.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-backup-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-backup.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-bridge-shim-1.1.0.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-client-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-client.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-common-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-common-2.0.2.3.1.0.0-78-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-common.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-endpoint-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-endpoint.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-examples-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-examples.jar:/usr/hdp/current/
hbase-master/bin/../lib/hbase-external-blockcache-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-external-blockcache.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-hadoop2-compat-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-hadoop2-compat-2.0.2.3.1.0.0-78-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-hadoop2-compat.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-hadoop-compat-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-hadoop-compat-2.0.2.3.1.0.0-78-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-hadoop-compat.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-http-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-http.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-it-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-it-2.0.2.3.1.0.0-78-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-it.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-mapreduce-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-mapreduce-2.0.2.3.1.0.0-78-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-mapreduce.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-metrics-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-metrics-api-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-metrics-api.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-metrics.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-procedure-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-procedure.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-protocol-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-protocol.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-protocol-shaded-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-protocol-shaded.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-replication-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-replication.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-resource-bundle-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-resource-bundle.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-rest-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-rest.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-rsgroup-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-rsgroup-2.0.2.3.1.0.0-78-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-rsgroup.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-server-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-server-2.0.2.3.1.0.0-78-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-server.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-shaded-client-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-shaded-client.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-shaded-mapreduce-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-shaded-mapreduce.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-shaded-miscellaneous-2.1.0.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-shaded-netty-2.1.0.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-shaded-protobuf-2.1.0.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-shell-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-shell.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-spark-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-spark-it-2.0.2.3.1.0.0-78-tests.jar:/usr/hdp/current/h
base-master/bin/../lib/hbase-spark.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-testing-util-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-testing-util.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-thrift-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-thrift.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-zookeeper-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-zookeeper-2.0.2.3.1.0.0-78-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-zookeeper.jar:/usr/hdp/current/hbase-master/bin/../lib/HikariCP-java7-2.4.12.jar:/usr/hdp/current/hbase-master/bin/../lib/hk2-api-2.5.0-b32.jar:/usr/hdp/current/hbase-master/bin/../lib/hk2-locator-2.5.0-b32.jar:/usr/hdp/current/hbase-master/bin/../lib/hk2-utils-2.5.0-b32.jar:/usr/hdp/current/hbase-master/bin/../lib/htrace-core-3.2.0-incubating.jar:/usr/hdp/current/hbase-master/bin/../lib/htrace-core4-4.2.0-incubating.jar:/usr/hdp/current/hbase-master/bin/../lib/httpclient-4.5.3.jar:/usr/hdp/current/hbase-master/bin/../lib/httpcore-4.4.6.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-annotations-2.9.5.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-core-2.9.5.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-core-asl-1.9.13.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-databind-2.9.5.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-jaxrs-1.9.2.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-jaxrs-base-2.9.5.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-jaxrs-json-provider-2.9.5.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-module-jaxb-annotations-2.9.5.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-module-paranamer-2.9.5.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-module-scala_2.11-2.9.5.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-xc-1.9.2.jar:/usr/hdp/current/hbase-master/bin/../lib/jamon-runtime-2.4.1.jar:/usr/hdp/current/hbase-master/bin/../lib/javassist-3.20.0-GA.jar:/usr/hdp/current/hbase-master/bin/../lib/java-util-1.9.0.jar:/usr/hdp/current/hbase-master/bin/../lib/javax.annotation-api-1.2.jar:/usr/hdp/current/hbase-master/bin/../lib/javax.el-3.0.1-b08.jar:/usr/hdp/current/hbase-master/bin/../lib/javax.inject-2.5.0-b32.jar:/usr/hdp/current/hbase-master/bin/../lib/javax.servlet-api-3.1.0.jar:/usr/hdp/current/hbase-master/bin/../lib/javax.servlet.jsp-2.3.2.jar:/usr/hdp/current/hbase-master/bin/../lib/javax.servlet.jsp-api-2.3.1.jar:/usr/hdp/current/hbase-master/bin/../lib/javax.servlet.jsp.jstl-1.2.0.v201105211821.jar:/usr/hdp/current/hbase-master/bin/../lib/javax.servlet.jsp.jstl-1.2.2.jar:/usr/hdp/current/hbase-master/bin/../lib/javax.ws.rs-api-2.0.1.jar:/usr/hdp/current/hbase-master/bin/../lib/jaxb-api-2.2.12.jar:/usr/hdp/current/hbase-master/bin/../lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/current/hbase-master/bin/../lib/jcip-annotations-1.0-1.jar:/usr/hdp/current/hbase-master/bin/../lib/jcodings-1.0.18.jar:/usr/hdp/current/hbase-master/bin/../lib/jersey-client-2.25.1.jar:/usr/hdp/current/hbase-master/bin/../lib/jersey-common-2.25.1.jar:/usr/hdp/current/hbase-master/bin/../lib/jersey-container-servlet-core-2.25.1.jar:/usr/hdp/current/hbase-master/bin/../lib/jersey-guava-2.25.1.jar:/usr/hdp/current/hbase-master/bin/../lib/jersey-media-jaxb-2.25.1.jar:/usr/hdp/current/hbase-master/bin/../lib/jersey-server-2.25.1.jar:/usr/hdp/current/hbase-master/bin/../lib/jettison-1.3.8.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-http-9.3.25.v201809
04.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-io-9.3.25.v20180904.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-jmx-9.3.25.v20180904.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-jsp-9.2.26.v20180806.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-schemas-3.1.M0.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-security-9.3.25.v20180904.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-server-9.3.25.v20180904.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-servlet-9.3.25.v20180904.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-util-9.3.25.v20180904.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-util-ajax-9.3.25.v20180904.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-webapp-9.3.25.v20180904.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-xml-9.3.25.v20180904.jar:/usr/hdp/current/hbase-master/bin/../lib/joni-2.1.11.jar:/usr/hdp/current/hbase-master/bin/../lib/jsch-0.1.54.jar:/usr/hdp/current/hbase-master/bin/../lib/json-io-2.5.1.jar:/usr/hdp/current/hbase-master/bin/../lib/jsr311-api-1.1.1.jar:/usr/hdp/current/hbase-master/bin/../lib/junit-4.12.jar:/usr/hdp/current/hbase-master/bin/../lib/leveldbjni-all-1.8.jar:/usr/hdp/current/hbase-master/bin/../lib/libthrift-0.9.3.jar:/usr/hdp/current/hbase-master/bin/../lib/log4j-1.2.17.jar:/usr/hdp/current/hbase-master/bin/../lib/metrics-core-3.2.1.jar:/usr/hdp/current/hbase-master/bin/../lib/mssql-jdbc-6.2.1.jre7.jar:/usr/hdp/current/hbase-master/bin/../lib/netty-all-4.0.52.Final.jar:/usr/hdp/current/hbase-master/bin/../lib/netty-buffer-4.1.17.Final.jar:/usr/hdp/current/hbase-master/bin/../lib/netty-codec-4.1.17.Final.jar:/usr/hdp/current/hbase-master/bin/../lib/netty-codec-http-4.1.17.Final.jar:/usr/hdp/current/hbase-master/bin/../lib/netty-common-4.1.17.Final.jar:/usr/hdp/current/hbase-master/bin/../lib/netty-handler-4.1.17.Final.jar:/usr/hdp/current/hbase-master/bin/../lib/netty-resolver-4.1.17.Final.jar:/usr/hdp/current/hbase-master/bin/../lib/netty-transport-4.1.17.Final.jar:/usr/hdp/current/hbase-master/bin/../lib/nimbus-jose-jwt-4.41.1.jar:/usr/hdp/current/hbase-master/bin/../lib/objenesis-2.1.jar:/usr/hdp/current/hbase-master/bin/../lib/okhttp-2.7.5.jar:/usr/hdp/current/hbase-master/bin/../lib/okio-1.6.0.jar:/usr/hdp/current/hbase-master/bin/../lib/org.eclipse.jdt.core-3.8.2.v20130121.jar:/usr/hdp/current/hbase-master/bin/../lib/osgi-resource-locator-1.0.1.jar:/usr/hdp/current/hbase-master/bin/../lib/paranamer-2.3.jar:/usr/hdp/current/hbase-master/bin/../lib/phoenix-server.jar:/usr/hdp/current/hbase-master/bin/../lib/protobuf-java-2.5.0.jar:/usr/hdp/current/hbase-master/bin/../lib/ranger-hbase-plugin-shim-1.2.0.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/ranger-plugin-classloader-1.2.0.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/re2j-1.1.jar:/usr/hdp/current/hbase-master/bin/../lib/slf4j-api-1.7.25.jar:/usr/hdp/current/hbase-master/bin/../lib/snappy-java-1.0.5.jar:/usr/hdp/current/hbase-master/bin/../lib/spymemcached-2.12.2.jar:/usr/hdp/current/hbase-master/bin/../lib/stax2-api-3.1.4.jar:/usr/hdp/current/hbase-master/bin/../lib/validation-api-1.1.0.Final.jar:/usr/hdp/current/hbase-master/bin/../lib/woodstox-core-5.0.3.jar:/usr/hdp/current/hbase-master/bin/../lib/xz-1.0.jar:/usr/hdp/current/hbase-master/bin/../lib/zookeeper.jar:/usr/hdp/3.1.0.0-78/hadoop/conf:/usr/hdp/3.1.0.0-78/hadoop/lib/*:/usr/hdp/3.1.0.0-78/hadoop/.//*:/usr/hdp/3.1.0.0-78/hadoop-hdfs/./:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/*:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//*:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/lib/*:/usr/hdp
/3.1.0.0-78/hadoop-mapreduce/.//*:/usr/hdp/3.1.0.0-78/hadoop-yarn/./:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/*:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//*:/usr/hdp/3.1.0.0-78/tez/*:/usr/hdp/3.1.0.0-78/tez/lib/*:/usr/hdp/3.1.0.0-78/tez/conf:/usr/hdp/3.1.0.0-78/tez/conf_llap:/usr/hdp/3.1.0.0-78/tez/doc:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-2.8-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib:/usr/hdp/3.1.0.0-78/tez/man:/usr/hdp/3.1.0.0-78/tez/tez-api-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-common-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-dag-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-examples-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-history-parser-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-javadoc-tools-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-job-analyzer-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-mapreduce-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-protobuf-history-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-internals-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-library-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-tests-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-cache-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-acls-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-fs-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/ui:/usr/hdp/3.1.0.0-78/tez/lib/async-http-client-1.9.40.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-codec-1.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections4-4.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-io-2.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/gcs-connector-1.9.10.3.1.0.0-78-shaded.jar:/usr/hdp/3.1.0.0-78/tez/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-aws-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-server-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-util-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/RoaringBitmap-0.4.9.jar:/usr/hdp/3.1.0.0-78/tez/lib/servlet-api-2.5.jar:/usr/hdp/3.1.0.0-78/tez/lib/slf4j-api-1.7.10.jar:/usr/hdp/3.1.0.0-78/tez/lib/tez.tar.gz:/usr/hdp/3.1.0.0-78/hadoop/conf:/usr/hdp/3.1.0.0-78/hadoop/*:/usr/hdp/3.1.0.0-78/hadoop/lib/*:/usr/hdp/3.1.0.0-78/zookeeper/*:
2019-04-26 10:39:06,519 INFO [main] util.ServerCommandLine: env:SERVER_GC_OPTS=-verbose:gc -XX:-PrintGCCause -XX:+PrintAdaptiveSizePolicy -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc.log-201904261039
2019-04-26 10:39:06,519 INFO [main] util.ServerCommandLine: env:HADOOP_CONF=/usr/hdp/3.1.0.0-78/hadoop/conf
2019-04-26 10:39:06,519 INFO [main] util.ServerCommandLine: env:HBASE_AUTOSTART_FILE=/var/run/hbase/hbase-hbase-master.autostart
2019-04-26 10:39:06,519 INFO [main] util.ServerCommandLine: env:HOSTNAME=bigdata-01.am.local
2019-04-26 10:39:06,519 INFO [main] util.ServerCommandLine: env:QTDIR=/usr/lib64/qt-3.3
2019-04-26 10:39:06,519 INFO [main] util.ServerCommandLine: env:NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
2019-04-26 10:39:06,519 INFO [main] util.ServerCommandLine: env:XDG_RUNTIME_DIR=/run/user/1022
2019-04-26 10:39:06,519 INFO [main] util.ServerCommandLine: env:HBASE_THRIFT_OPTS=
2019-04-26 10:39:06,519 INFO [main] util.ServerCommandLine: env:HBASE_HOME=/usr/hdp/current/hbase-master/bin/..
2019-04-26 10:39:06,519 INFO [main] util.ServerCommandLine: env:QTLIB=/usr/lib64/qt-3.3/lib
2019-04-26 10:39:06,519 INFO [main] util.ServerCommandLine: env:HOME=/home/hbase
2019-04-26 10:39:06,519 INFO [main] util.ServerCommandLine: env:MALLOC_ARENA_MAX=4
2019-04-26 10:39:06,521 INFO [main] util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=25.112-b15
2019-04-26 10:39:06,522 INFO [main] util.ServerCommandLine: vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p, -Dhdp.version=3.1.0.0-78, -XX:+UseG1GC, -XX:MaxGCPauseMillis=100, -XX:-ResizePLAB, -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log, -Djava.io.tmpdir=/tmp, -verbose:gc, -XX:-PrintGCCause, -XX:+PrintAdaptiveSizePolicy, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -Xloggc:/var/log/hbase/gc.log-201904261039, -Xmx1024m, -XX:ParallelGCThreads=8, -Dhbase.log.dir=/var/log/hbase, -Dhbase.log.file=hbase-hbase-master-bigdata-01.am.local.log, -Dhbase.home.dir=/usr/hdp/current/hbase-master/bin/.., -Dhbase.id.str=hbase, -Dhbase.root.logger=INFO,RFA, -Djava.library.path=:/usr/hdp/3.1.0.0-78/hadoop/lib/native/Linux-amd64-64:/usr/hdp/3.1.0.0-78/hadoop/lib/native/Linux-amd64-64:/usr/hdp/3.1.0.0-78/hadoop/lib/native, -Dhbase.security.logger=INFO,RFAS]
2019-04-26 10:39:06,843 INFO [main] impl.MetricsConfig: Loaded properties from hadoop-metrics2-hbase.properties
2019-04-26 10:39:07,222 INFO [main] timeline.HadoopTimelineMetricsSink: Initializing Timeline metrics sink.
2019-04-26 10:39:07,226 INFO [main] timeline.HadoopTimelineMetricsSink: Identified hostname = bigdata-01.am.local, serviceName = hbase
2019-04-26 10:39:07,279 INFO [main] availability.MetricSinkWriteShardHostnameHashingStrategy: Calculated collector shard bigdata-03.am.local based on hostname: bigdata-01.am.local
2019-04-26 10:39:07,279 INFO [main] timeline.HadoopTimelineMetricsSink: Collector Uri: http://bigdata-03.am.local:6188/ws/v1/timeline/metrics
2019-04-26 10:39:07,279 INFO [main] timeline.HadoopTimelineMetricsSink: Container Metrics Uri: http://bigdata-03.am.local:6188/ws/v1/timeline/containermetrics
2019-04-26 10:39:07,293 INFO [main] impl.MetricsSinkAdapter: Sink timeline started
2019-04-26 10:39:07,413 INFO [main] impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2019-04-26 10:39:07,414 INFO [main] impl.MetricsSystemImpl: HBase metrics system started
2019-04-26 10:39:07,436 INFO [main] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2019-04-26 10:39:07,800 INFO [main] regionserver.RSRpcServices: master/bigdata-01:16000 server-side Connection retries=105
2019-04-26 10:39:07,825 INFO [main] ipc.RpcExecutor: Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=5, maxQueueLength=500, handlerCount=50
2019-04-26 10:39:07,827 INFO [main] ipc.RpcExecutor: Instantiated priority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=500, handlerCount=20
2019-04-26 10:39:07,827 INFO [main] ipc.RpcExecutor: Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=500, handlerCount=3
2019-04-26 10:39:07,828 INFO [main] ipc.PhoenixRpcSchedulerFactory: Using custom Phoenix Index RPC Handling with index rpc priority 1000 and metadata rpc priority 2000
2019-04-26 10:39:07,828 INFO [main] ipc.RpcExecutor: Instantiated Index.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=150, handlerCount=15
2019-04-26 10:39:07,828 INFO [main] ipc.RpcExecutor: Instantiated Metadata.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=3, maxQueueLength=300, handlerCount=30
2019-04-26 10:39:07,947 INFO [main] ipc.RpcServerFactory: Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientService, hbase.pb.AdminService
2019-04-26 10:39:08,177 INFO [main] ipc.NettyRpcServer: Bind to /172.16.8.2:16000
2019-04-26 10:39:08,238 INFO [main] hfile.CacheConfig: Created cacheConfig: CacheConfig:disabled
2019-04-26 10:39:08,239 INFO [main] hfile.CacheConfig: Created cacheConfig: CacheConfig:disabled
2019-04-26 10:39:09,086 INFO [main] fs.HFileSystem: Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2019-04-26 10:39:09,090 INFO [main] fs.HFileSystem: Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2019-04-26 10:39:09,138 INFO [main] zookeeper.RecoverableZooKeeper: Process identifier=master:16000 connecting to ZooKeeper ensemble=bigdata-03.am.local:2181,bigdata-01.am.local:2181,bigdata-02.am.local:2181
2019-04-26 10:39:09,144 INFO [main] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-78--1, built on 12/06/2018 12:30 GMT
2019-04-26 10:39:09,144 INFO [main] zookeeper.ZooKeeper: Client environment:host.name=bigdata-01.am.local
2019-04-26 10:39:09,144 INFO [main] zookeeper.ZooKeeper: Client environment:java.version=1.8.0_112
2019-04-26 10:39:09,144 INFO [main] zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
2019-04-26 10:39:09,144 INFO [main] zookeeper.ZooKeeper: Client environment:java.home=/usr/jdk64/jdk1.8.0_112/jre
2019-04-26 10:39:09,144 INFO [main] zookeeper.ZooKeeper: Client environment:java.class.path=/usr/hdp/current/hbase-master/conf:/usr/jdk64/jdk1.8.0_112/lib/tools.jar:/usr/hdp/current/hbase-master/bin/..:/usr/hdp/current/hbase-master/bin/../lib/aopalliance-1.0.jar:/usr/hdp/current/hbase-master/bin/../lib/aopalliance-repackaged-2.5.0-b32.jar:/usr/hdp/current/hbase-master/bin/../lib/atlas-plugin-classloader-1.1.0.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/audience-annotations-0.5.0.jar:/usr/hdp/current/hbase-master/bin/../lib/avro-1.7.7.jar:/usr/hdp/current/hbase-master/bin/../lib/aws-java-sdk-bundle-1.11.271.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-beanutils-1.9.3.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-cli-1.2.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-codec-1.10.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-collections-3.2.2.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-compress-1.4.1.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-configuration2-2.1.1.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-crypto-1.0.0.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-csv-1.0.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-daemon-1.0.13.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-io-2.5.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-lang-2.6.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-lang3-3.6.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-logging-1.2.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-math3-3.6.1.jar:/usr/hdp/current/hbase-master/bin/../lib/commons-net-3.6.jar:/usr/hdp/current/hbase-master/bin/../lib/curator-client-4.0.0.jar:/usr/hdp/current/hbase-master/bin/../lib/curator-framework-4.0.0.jar:/usr/hdp/current/hbase-master/bin/../lib/curator-recipes-4.0.0.jar:/usr/hdp/current/hbase-master/bin/../lib/disruptor-3.3.6.jar:/usr/hdp/current/hbase-master/bin/../lib/dnsjava-2.1.7.jar:/usr/hdp/current/hbase-master/bin/../lib/ehcache-3.3.1.jar:/usr/hdp/current/hbase-master/bin/../lib/findbugs-annotations-1.3.9-1.jar:/usr/hdp/current/hbase-master/bin/../lib/fst-2.50.jar:/usr/hdp/current/hbase-master/bin/../lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/hdp/current/hbase-master/bin/../lib/gson-2.2.4.jar:/usr/hdp/current/hbase-master/bin/../lib/guava-11.0.2.jar:/usr/hdp/current/hbase-master/bin/../lib/guice-4.0.jar:/usr/hdp/current/hbase-master/bin/../lib/guice-servlet-4.0.jar:/usr/hdp/current/hbase-master/bin/../lib/hamcrest-core-1.3.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-annotations-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-annotations-2.0.2.3.1.0.0-78-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-annotations.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-backup-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-backup.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-bridge-shim-1.1.0.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-client-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-client.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-common-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-common-2.0.2.3.1.0.0-78-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-common.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-endpoint-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-endpoint.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-examples-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-examples.jar
:/usr/hdp/current/hbase-master/bin/../lib/hbase-external-blockcache-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-external-blockcache.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-hadoop2-compat-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-hadoop2-compat-2.0.2.3.1.0.0-78-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-hadoop2-compat.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-hadoop-compat-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-hadoop-compat-2.0.2.3.1.0.0-78-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-hadoop-compat.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-http-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-http.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-it-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-it-2.0.2.3.1.0.0-78-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-it.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-mapreduce-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-mapreduce-2.0.2.3.1.0.0-78-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-mapreduce.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-metrics-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-metrics-api-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-metrics-api.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-metrics.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-procedure-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-procedure.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-protocol-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-protocol.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-protocol-shaded-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-protocol-shaded.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-replication-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-replication.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-resource-bundle-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-resource-bundle.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-rest-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-rest.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-rsgroup-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-rsgroup-2.0.2.3.1.0.0-78-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-rsgroup.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-server-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-server-2.0.2.3.1.0.0-78-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-server.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-shaded-client-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-shaded-client.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-shaded-mapreduce-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-shaded-mapreduce.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-shaded-miscellaneous-2.1.0.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-shaded-netty-2.1.0.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-shaded-protobuf-2.1.0.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-shell-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-shell.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-spark-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-spark-it-2.0.2.3.1.0.0-78-tests.jar:
/usr/hdp/current/hbase-master/bin/../lib/hbase-spark.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-testing-util-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-testing-util.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-thrift-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-thrift.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-zookeeper-2.0.2.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-zookeeper-2.0.2.3.1.0.0-78-tests.jar:/usr/hdp/current/hbase-master/bin/../lib/hbase-zookeeper.jar:/usr/hdp/current/hbase-master/bin/../lib/HikariCP-java7-2.4.12.jar:/usr/hdp/current/hbase-master/bin/../lib/hk2-api-2.5.0-b32.jar:/usr/hdp/current/hbase-master/bin/../lib/hk2-locator-2.5.0-b32.jar:/usr/hdp/current/hbase-master/bin/../lib/hk2-utils-2.5.0-b32.jar:/usr/hdp/current/hbase-master/bin/../lib/htrace-core-3.2.0-incubating.jar:/usr/hdp/current/hbase-master/bin/../lib/htrace-core4-4.2.0-incubating.jar:/usr/hdp/current/hbase-master/bin/../lib/httpclient-4.5.3.jar:/usr/hdp/current/hbase-master/bin/../lib/httpcore-4.4.6.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-annotations-2.9.5.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-core-2.9.5.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-core-asl-1.9.13.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-databind-2.9.5.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-jaxrs-1.9.2.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-jaxrs-base-2.9.5.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-jaxrs-json-provider-2.9.5.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-module-jaxb-annotations-2.9.5.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-module-paranamer-2.9.5.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-module-scala_2.11-2.9.5.jar:/usr/hdp/current/hbase-master/bin/../lib/jackson-xc-1.9.2.jar:/usr/hdp/current/hbase-master/bin/../lib/jamon-runtime-2.4.1.jar:/usr/hdp/current/hbase-master/bin/../lib/javassist-3.20.0-GA.jar:/usr/hdp/current/hbase-master/bin/../lib/java-util-1.9.0.jar:/usr/hdp/current/hbase-master/bin/../lib/javax.annotation-api-1.2.jar:/usr/hdp/current/hbase-master/bin/../lib/javax.el-3.0.1-b08.jar:/usr/hdp/current/hbase-master/bin/../lib/javax.inject-2.5.0-b32.jar:/usr/hdp/current/hbase-master/bin/../lib/javax.servlet-api-3.1.0.jar:/usr/hdp/current/hbase-master/bin/../lib/javax.servlet.jsp-2.3.2.jar:/usr/hdp/current/hbase-master/bin/../lib/javax.servlet.jsp-api-2.3.1.jar:/usr/hdp/current/hbase-master/bin/../lib/javax.servlet.jsp.jstl-1.2.0.v201105211821.jar:/usr/hdp/current/hbase-master/bin/../lib/javax.servlet.jsp.jstl-1.2.2.jar:/usr/hdp/current/hbase-master/bin/../lib/javax.ws.rs-api-2.0.1.jar:/usr/hdp/current/hbase-master/bin/../lib/jaxb-api-2.2.12.jar:/usr/hdp/current/hbase-master/bin/../lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/current/hbase-master/bin/../lib/jcip-annotations-1.0-1.jar:/usr/hdp/current/hbase-master/bin/../lib/jcodings-1.0.18.jar:/usr/hdp/current/hbase-master/bin/../lib/jersey-client-2.25.1.jar:/usr/hdp/current/hbase-master/bin/../lib/jersey-common-2.25.1.jar:/usr/hdp/current/hbase-master/bin/../lib/jersey-container-servlet-core-2.25.1.jar:/usr/hdp/current/hbase-master/bin/../lib/jersey-guava-2.25.1.jar:/usr/hdp/current/hbase-master/bin/../lib/jersey-media-jaxb-2.25.1.jar:/usr/hdp/current/hbase-master/bin/../lib/jersey-server-2.25.1.jar:/usr/hdp/current/hbase-master/bin/../lib/jettison-1.3.8.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-h
ttp-9.3.25.v20180904.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-io-9.3.25.v20180904.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-jmx-9.3.25.v20180904.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-jsp-9.2.26.v20180806.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-schemas-3.1.M0.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-security-9.3.25.v20180904.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-server-9.3.25.v20180904.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-servlet-9.3.25.v20180904.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-util-9.3.25.v20180904.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-util-ajax-9.3.25.v20180904.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-webapp-9.3.25.v20180904.jar:/usr/hdp/current/hbase-master/bin/../lib/jetty-xml-9.3.25.v20180904.jar:/usr/hdp/current/hbase-master/bin/../lib/joni-2.1.11.jar:/usr/hdp/current/hbase-master/bin/../lib/jsch-0.1.54.jar:/usr/hdp/current/hbase-master/bin/../lib/json-io-2.5.1.jar:/usr/hdp/current/hbase-master/bin/../lib/jsr311-api-1.1.1.jar:/usr/hdp/current/hbase-master/bin/../lib/junit-4.12.jar:/usr/hdp/current/hbase-master/bin/../lib/leveldbjni-all-1.8.jar:/usr/hdp/current/hbase-master/bin/../lib/libthrift-0.9.3.jar:/usr/hdp/current/hbase-master/bin/../lib/log4j-1.2.17.jar:/usr/hdp/current/hbase-master/bin/../lib/metrics-core-3.2.1.jar:/usr/hdp/current/hbase-master/bin/../lib/mssql-jdbc-6.2.1.jre7.jar:/usr/hdp/current/hbase-master/bin/../lib/netty-all-4.0.52.Final.jar:/usr/hdp/current/hbase-master/bin/../lib/netty-buffer-4.1.17.Final.jar:/usr/hdp/current/hbase-master/bin/../lib/netty-codec-4.1.17.Final.jar:/usr/hdp/current/hbase-master/bin/../lib/netty-codec-http-4.1.17.Final.jar:/usr/hdp/current/hbase-master/bin/../lib/netty-common-4.1.17.Final.jar:/usr/hdp/current/hbase-master/bin/../lib/netty-handler-4.1.17.Final.jar:/usr/hdp/current/hbase-master/bin/../lib/netty-resolver-4.1.17.Final.jar:/usr/hdp/current/hbase-master/bin/../lib/netty-transport-4.1.17.Final.jar:/usr/hdp/current/hbase-master/bin/../lib/nimbus-jose-jwt-4.41.1.jar:/usr/hdp/current/hbase-master/bin/../lib/objenesis-2.1.jar:/usr/hdp/current/hbase-master/bin/../lib/okhttp-2.7.5.jar:/usr/hdp/current/hbase-master/bin/../lib/okio-1.6.0.jar:/usr/hdp/current/hbase-master/bin/../lib/org.eclipse.jdt.core-3.8.2.v20130121.jar:/usr/hdp/current/hbase-master/bin/../lib/osgi-resource-locator-1.0.1.jar:/usr/hdp/current/hbase-master/bin/../lib/paranamer-2.3.jar:/usr/hdp/current/hbase-master/bin/../lib/phoenix-server.jar:/usr/hdp/current/hbase-master/bin/../lib/protobuf-java-2.5.0.jar:/usr/hdp/current/hbase-master/bin/../lib/ranger-hbase-plugin-shim-1.2.0.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/ranger-plugin-classloader-1.2.0.3.1.0.0-78.jar:/usr/hdp/current/hbase-master/bin/../lib/re2j-1.1.jar:/usr/hdp/current/hbase-master/bin/../lib/slf4j-api-1.7.25.jar:/usr/hdp/current/hbase-master/bin/../lib/snappy-java-1.0.5.jar:/usr/hdp/current/hbase-master/bin/../lib/spymemcached-2.12.2.jar:/usr/hdp/current/hbase-master/bin/../lib/stax2-api-3.1.4.jar:/usr/hdp/current/hbase-master/bin/../lib/validation-api-1.1.0.Final.jar:/usr/hdp/current/hbase-master/bin/../lib/woodstox-core-5.0.3.jar:/usr/hdp/current/hbase-master/bin/../lib/xz-1.0.jar:/usr/hdp/current/hbase-master/bin/../lib/zookeeper.jar:/usr/hdp/3.1.0.0-78/hadoop/conf:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-server-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-hdfs-plugin-shim-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-servlet-1.19.jar:/usr/hdp/3.1.0.0-78/had
oop/lib/ranger-plugin-classloader-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/nimbus-jose-jwt-4.41.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-yarn-plugin-shim-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-admin-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/accessors-smart-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-api-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/asm-5.0.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/avro-1.7.7.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-client-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-beanutils-1.9.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/snappy-java-1.0.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-common-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-codec-1.11.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-http-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-core-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-io-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-configuration2-2.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/stax2-api-3.1.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-io-2.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-crypto-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-identity-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-lang3-3.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-server-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-simplekdc-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/token-provider-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-net-3.6.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-client-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-security-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-framework-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-asn1-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-recipes-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/woodstox-core-5.0.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/gson-2.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/xz-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-server-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/htrace-core4-4.1.0-incubating.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-config-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/httpclient-4.5.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/httpcore-4.4.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsch-0.1.54.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-annotations-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-pkix-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-core-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-xdr-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-databind-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/json-smart-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/metrics-core-3.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/netty-3.10.5.Final.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/javax.servlet-api-3.1.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jaxb-api-2.2.11
.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/paranamer-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/re2j-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jcip-annotations-1.0-1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-core-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/zookeeper-3.4.6.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jettison-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-util-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsr311-api-1.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-xml-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jul-to-slf4j-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/.//azure-data-lake-store-sdk-2.2.7.jar:/usr/hdp/3.1.0.0-78/hadoop/.//azure-keyvault-core-1.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/.//azure-storage-7.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-annotations-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-annotations.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-auth-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-auth.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure-datalake.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-azure.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common-tests.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-common.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-kms-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-kms.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-nfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/.//hadoop-nfs.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/./:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-common-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/accessors-smart-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/token-provider-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/asm-5.0.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/woodstox-core-5.0.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/avro-1.7.7.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-core-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-beanutils-1.9.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/xz-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-crypto-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-codec-1.11.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-util-ajax-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-identity-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-compress-1.4.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-security-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-configuration2-2.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-server-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-io-2.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-simplekdc-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-lang
3-3.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-asn1-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-config-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/commons-net-3.6.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-pkix-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/curator-client-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jsch-0.1.54.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/curator-framework-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/curator-recipes-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/gson-2.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-xml-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/htrace-core4-4.1.0-incubating.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerby-xdr-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/httpclient-4.5.2.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/httpcore-4.4.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/json-simple-1.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-annotations-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-core-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/netty-3.10.5.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-databind-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/netty-all-4.0.52.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/json-smart-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jackson-xc-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/okhttp-2.7.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/javax.servlet-api-3.1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jaxb-api-2.2.11.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/okio-1.6.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/paranamer-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jcip-annotations-1.0-1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-core-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/re2j-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/snappy-java-1.0.5.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-server-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/stax2-api-3.1.4.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jersey-servlet-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jettison-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-http-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jsr311-api-1.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-io-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-server-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-admin-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-util-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/kerb-client-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/zookeeper-3.4.6.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop
-hdfs/.//hadoop-hdfs-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-client.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-httpfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-httpfs.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-native-client.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-nfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-rbf.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/lib/*:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//kafka-clients-0.8.2.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//aliyun-sdk-oss-2.8.3.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//aws-java-sdk-bundle-1.11.271.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-uploader.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//azure-data-lake-store-sdk-2.2.7.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-resourceestimator.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//azure-keyvault-core-1.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-buffer-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//azure-storage-7.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//flogger-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//flogger-log4j-backend-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-openstack-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//flogger-system-backend-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//gcs-connector-1.9.10.3.1.0.0-78-shaded.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//lz4-1.2.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//google-extensions-0.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-openstack.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aliyun-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//ojalgo-43.0.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aliyun.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archive-logs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-codec-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archive-logs.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archives-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-archives.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-rumen-3.1.1.3.1.0.0-78.jar:
/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aws-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-aws.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-examples-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-codec-http-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure-datalake.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-azure.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-datajoin-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-sls-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-distcp-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-sls.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-extras-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-extras.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-fs2img-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-fs2img.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-streaming-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-gridmix-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-kafka-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-kafka.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-app-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//jdom-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-mapreduce-client-uploader-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//hadoop-resourceestimator-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-common-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-handler-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-resolver-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//netty-transport-4.1.17.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//wildfly-openssl-1.0.4.Final.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/./:/usr/hdp/3.1.0.0-78/hadoop
-yarn/lib/HikariCP-java7-2.4.12.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/dnsjava-2.1.7.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/ehcache-3.3.1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/fst-2.50.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/guice-4.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/guice-servlet-4.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jackson-jaxrs-base-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jackson-jaxrs-json-provider-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jackson-module-jaxb-annotations-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/java-util-1.9.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/javax.inject-1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/jersey-guice-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/json-io-2.5.1.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/metrics-core-3.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/objenesis-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/snakeyaml-1.16.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/swagger-annotations-1.5.4.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-api-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-web-proxy-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-registry-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-services-api-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-nodemanager-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-services-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-router-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-router.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-tests-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-tests
.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-services-api.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-server-timeline-pluginstorage.jar:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//hadoop-yarn-services-core.jar:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-2.8-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-api-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-common-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-dag-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-examples-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-history-parser-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-javadoc-tools-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-job-analyzer-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-mapreduce-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-protobuf-history-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-internals-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-library-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-tests-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-cache-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-acls-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-fs-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/RoaringBitmap-0.4.9.jar:/usr/hdp/3.1.0.0-78/tez/lib/async-http-client-1.9.40.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-codec-1.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections4-4.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-io-2.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/gcs-connector-1.9.10.3.1.0.0-78-shaded.jar:/usr/hdp/3.1.0.0-78/tez/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-aws-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-server-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-util-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/servlet-api-2.5.jar:/usr/hdp/3.1.0.0-78/tez/lib/slf4j-api-1.7.10.jar:/usr/hdp/3.1.0.0-78/tez/conf:/usr/hdp/3.1.0.0-78/tez/conf_llap:/usr/hdp/3.1.0.0-78/tez/doc:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/hadoop-shim-2.8-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib:/usr/hdp/3.1.0.0-78/tez/man:/usr/hdp/3.1.0.0-78/tez/tez-api-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-common-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-dag-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-examples-0.9.1.3.1.0.0-7
8.jar:/usr/hdp/3.1.0.0-78/tez/tez-history-parser-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-javadoc-tools-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-job-analyzer-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-mapreduce-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-protobuf-history-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-internals-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-runtime-library-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-tests-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-cache-plugin-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-acls-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/tez-yarn-timeline-history-with-fs-0.9.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/ui:/usr/hdp/3.1.0.0-78/tez/lib/async-http-client-1.9.40.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-codec-1.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-collections4-4.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-io-2.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/tez/lib/gcs-connector-1.9.10.3.1.0.0-78-shaded.jar:/usr/hdp/3.1.0.0-78/tez/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-aws-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-server-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-util-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/RoaringBitmap-0.4.9.jar:/usr/hdp/3.1.0.0-78/tez/lib/servlet-api-2.5.jar:/usr/hdp/3.1.0.0-78/tez/lib/slf4j-api-1.7.10.jar:/usr/hdp/3.1.0.0-78/tez/lib/tez.tar.gz:/usr/hdp/3.1.0.0-78/hadoop/conf:/usr/hdp/3.1.0.0-78/hadoop/azure-data-lake-store-sdk-2.2.7.jar:/usr/hdp/3.1.0.0-78/hadoop/azure-keyvault-core-1.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/azure-storage-7.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/hadoop-annotations-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/hadoop-annotations.jar:/usr/hdp/3.1.0.0-78/hadoop/hadoop-auth-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/hadoop-auth.jar:/usr/hdp/3.1.0.0-78/hadoop/hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/hadoop-azure-datalake.jar:/usr/hdp/3.1.0.0-78/hadoop/hadoop-azure.jar:/usr/hdp/3.1.0.0-78/hadoop/hadoop-common-3.1.1.3.1.0.0-78-tests.jar:/usr/hdp/3.1.0.0-78/hadoop/hadoop-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/hadoop-common-tests.jar:/usr/hdp/3.1.0.0-78/hadoop/hadoop-common.jar:/usr/hdp/3.1.0.0-78/hadoop/hadoop-kms-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/hadoop-kms.jar:/usr/hdp/3.1.0.0-78/hadoop/hadoop-nfs-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/h
adoop/hadoop-nfs.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-server-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-hdfs-plugin-shim-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-servlet-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-plugin-classloader-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/nimbus-jose-jwt-4.41.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-yarn-plugin-shim-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-admin-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/accessors-smart-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-api-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/asm-5.0.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/avro-1.7.7.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-client-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-beanutils-1.9.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/snappy-java-1.0.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-common-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-codec-1.11.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-http-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-core-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-io-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-configuration2-2.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/stax2-api-3.1.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-io-2.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-crypto-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-identity-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-lang3-3.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-server-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-simplekdc-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/token-provider-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-net-3.6.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-client-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-security-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-framework-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-asn1-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-recipes-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/woodstox-core-5.0.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/gson-2.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/xz-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-server-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/htrace-core4-4.1.0-incubating.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-config-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/httpclient-4.5.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/httpcore-4.4.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsch-0.1.54.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-annotations-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-pkix-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-core-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-xdr-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-databind-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/json-smart-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/metric
s-core-3.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/netty-3.10.5.Final.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/javax.servlet-api-3.1.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jaxb-api-2.2.11.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/paranamer-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/re2j-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jcip-annotations-1.0-1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-core-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/zookeeper-3.4.6.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jettison-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-servlet-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-util-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsr311-api-1.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-xml-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jul-to-slf4j-1.7.25.jar:/usr/hdp/3.1.0.0-78/zookeeper/zookeeper-3.4.6.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/zookeeper/zookeeper.jar:
2019-04-26 10:39:09,145 INFO [main] zookeeper.ZooKeeper: Client environment:java.library.path=:/usr/hdp/3.1.0.0-78/hadoop/lib/native/Linux-amd64-64:/usr/hdp/3.1.0.0-78/hadoop/lib/native/Linux-amd64-64:/usr/hdp/3.1.0.0-78/hadoop/lib/native
2019-04-26 10:39:09,145 INFO [main] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2019-04-26 10:39:09,145 INFO [main] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2019-04-26 10:39:09,145 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
2019-04-26 10:39:09,145 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2019-04-26 10:39:09,145 INFO [main] zookeeper.ZooKeeper: Client environment:os.version=3.10.0-957.5.1.el7.x86_64
2019-04-26 10:39:09,145 INFO [main] zookeeper.ZooKeeper: Client environment:user.name=hbase
2019-04-26 10:39:09,145 INFO [main] zookeeper.ZooKeeper: Client environment:user.home=/home/hbase
2019-04-26 10:39:09,145 INFO [main] zookeeper.ZooKeeper: Client environment:user.dir=/home/hbase
2019-04-26 10:39:09,146 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=bigdata-03.am.local:2181,bigdata-01.am.local:2181,bigdata-02.am.local:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@1981d861
2019-04-26 10:39:09,164 INFO [main-SendThread(bigdata-02.am.local:2181)] zookeeper.ClientCnxn: Opening socket connection to server bigdata-02.am.local/172.16.8.3:2181. Will not attempt to authenticate using SASL (unknown error)
2019-04-26 10:39:09,168 INFO [main-SendThread(bigdata-02.am.local:2181)] zookeeper.ClientCnxn: Socket connection established, initiating session, client: /172.16.8.2:49326, server: bigdata-02.am.local/172.16.8.3:2181
2019-04-26 10:39:09,174 INFO [main-SendThread(bigdata-02.am.local:2181)] zookeeper.ClientCnxn: Session establishment complete on server bigdata-02.am.local/172.16.8.3:2181, sessionid = 0x26a5781c8df004c, negotiated timeout = 60000
2019-04-26 10:39:09,245 INFO [main] util.log: Logging initialized @3480ms
2019-04-26 10:39:09,310 INFO [main] http.HttpRequestLog: Http request log for http.requests.master is not defined
2019-04-26 10:39:09,325 INFO [main] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2019-04-26 10:39:09,325 INFO [main] http.HttpServer: Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2019-04-26 10:39:09,327 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2019-04-26 10:39:09,327 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2019-04-26 10:39:09,327 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2019-04-26 10:39:09,349 INFO [main] http.HttpServer: Jetty bound to port 16010
2019-04-26 10:39:09,351 INFO [main] server.Server: jetty-9.3.25.v20180904, build timestamp: 2018-09-05T04:11:46+07:00, git hash: 3ce520221d0240229c862b122d2b06c12a625732
2019-04-26 10:39:09,387 INFO [main] handler.ContextHandler: Started o.e.j.s.ServletContextHandler@76332405{/logs,file:///var/log/hbase/,AVAILABLE}
2019-04-26 10:39:09,388 INFO [main] handler.ContextHandler: Started o.e.j.s.ServletContextHandler@d1d8e1a{/static,file:///usr/hdp/3.1.0.0-78/hbase/hbase-webapps/static/,AVAILABLE}
2019-04-26 10:39:09,507 INFO [main] handler.ContextHandler: Started o.e.j.w.WebAppContext@58015e56{/,file:///usr/hdp/3.1.0.0-78/hbase/hbase-webapps/master/,AVAILABLE}{file:/usr/hdp/3.1.0.0-78/hbase/hbase-webapps/master}
2019-04-26 10:39:09,511 INFO [main] server.AbstractConnector: Started ServerConnector@6f798482{HTTP/1.1,[http/1.1]}{0.0.0.0:16010}
2019-04-26 10:39:09,512 INFO [main] server.Server: Started @3747ms
2019-04-26 10:39:09,514 INFO [main] master.HMaster: hbase.rootdir=hdfs://bigdata-01.am.local:8020/apps/hbase/data, hbase.cluster.distributed=true
2019-04-26 10:39:09,533 INFO [Thread-16] master.HMaster: Adding backup master ZNode /hbase-unsecure/backup-masters/bigdata-01.am.local,16000,1556249946549
2019-04-26 10:39:09,586 INFO [Thread-16] master.ActiveMasterManager: Deleting ZNode for /hbase-unsecure/backup-masters/bigdata-01.am.local,16000,1556249946549 from backup master directory
2019-04-26 10:39:09,589 INFO [Thread-16] master.ActiveMasterManager: Registered as active master=bigdata-01.am.local,16000,1556249946549
2019-04-26 10:39:09,590 INFO [master/bigdata-01:16000] regionserver.HRegionServer: ClusterId : f1d6d95c-6016-457f-be68-e1ad3879de80
2019-04-26 10:39:09,592 INFO [Thread-16] regionserver.ChunkCreator: Allocating data MemStoreChunkPool with chunk size 2 MB, max count 184, initial count 0
2019-04-26 10:39:09,594 INFO [Thread-16] regionserver.ChunkCreator: Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 204, initial count 0
2019-04-26 10:39:09,736 ERROR [Thread-16] master.HMaster: Failed to become active master
org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=READ, inode="/apps/hbase/data/hbase.version":root:hdfs:-rw-------
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:261)
at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkDefaultEnforcer(RangerHdfsAuthorizer.java:589)
at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:377)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1857)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1841)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1791)
at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:160)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1931)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:738)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:426)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:864)
at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:851)
at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:840)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1004)
at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:326)
at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:322)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:334)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899)
at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:341)
at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:426)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:262)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:151)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:122)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:827)
at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2225)
at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:568)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=hbase, access=READ, inode="/apps/hbase/data/hbase.version":root:hdfs:-rw-------
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:261)
at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkDefaultEnforcer(RangerHdfsAuthorizer.java:589)
at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:377)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1857)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1841)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1791)
at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:160)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1931)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:738)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:426)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
at org.apache.hadoop.ipc.Client.call(Client.java:1443)
at org.apache.hadoop.ipc.Client.call(Client.java:1353)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy18.getBlockLocations(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:317)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy19.getBlockLocations(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:372)
at com.sun.proxy.$Proxy20.getBlockLocations(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:862)
... 17 more
2019-04-26 10:39:09,738 ERROR [Thread-16] master.HMaster: ***** ABORTING master bigdata-01.am.local,16000,1556249946549: Unhandled exception. Starting shutdown. *****
org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=READ, inode="/apps/hbase/data/hbase.version":root:hdfs:-rw-------
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:261)
at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkDefaultEnforcer(RangerHdfsAuthorizer.java:589)
at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:377)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1857)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1841)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1791)
at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:160)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1931)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:738)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:426)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:864)
at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:851)
at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:840)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1004)
at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:326)
at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:322)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:334)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899)
at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:341)
at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:426)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:262)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:151)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:122)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:827)
at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2225)
at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:568)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=hbase, access=READ, inode="/apps/hbase/data/hbase.version":root:hdfs:-rw-------
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:261)
at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkDefaultEnforcer(RangerHdfsAuthorizer.java:589)
at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:377)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1857)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1841)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1791)
at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:160)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1931)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:738)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:426)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
at org.apache.hadoop.ipc.Client.call(Client.java:1443)
at org.apache.hadoop.ipc.Client.call(Client.java:1353)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy18.getBlockLocations(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:317)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy19.getBlockLocations(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:372)
at com.sun.proxy.$Proxy20.getBlockLocations(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:862)
... 17 more
2019-04-26 10:39:09,740 INFO [Thread-16] regionserver.HRegionServer: ***** STOPPING region server 'bigdata-01.am.local,16000,1556249946549' *****
2019-04-26 10:39:09,740 INFO [Thread-16] regionserver.HRegionServer: STOPPED: Stopped by Thread-16
2019-04-26 10:39:09,763 INFO [master/bigdata-01:16000] zookeeper.ReadOnlyZKClient: Connect 0x18feb8ce to bigdata-03.am.local:2181,bigdata-01.am.local:2181,bigdata-02.am.local:2181 with session timeout=90000ms, retries 6, retry interval 1000ms, keepAlive=60000ms
2019-04-26 10:39:09,769 INFO [ReadOnlyZKClient-bigdata-03.am.local:2181,bigdata-01.am.local:2181,bigdata-02.am.local:2181@0x18feb8ce] zookeeper.ZooKeeper: Initiating client connection, connectString=bigdata-03.am.local:2181,bigdata-01.am.local:2181,bigdata-02.am.local:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$46/208718617@497254f3
2019-04-26 10:39:09,770 INFO [ReadOnlyZKClient-bigdata-03.am.local:2181,bigdata-01.am.local:2181,bigdata-02.am.local:2181@0x18feb8ce-SendThread(bigdata-02.am.local:2181)] zookeeper.ClientCnxn: Opening socket connection to server bigdata-02.am.local/172.16.8.3:2181. Will not attempt to authenticate using SASL (unknown error)
2019-04-26 10:39:09,770 INFO [ReadOnlyZKClient-bigdata-03.am.local:2181,bigdata-01.am.local:2181,bigdata-02.am.local:2181@0x18feb8ce-SendThread(bigdata-02.am.local:2181)] zookeeper.ClientCnxn: Socket connection established, initiating session, client: /172.16.8.2:49336, server: bigdata-02.am.local/172.16.8.3:2181
2019-04-26 10:39:09,772 INFO [ReadOnlyZKClient-bigdata-03.am.local:2181,bigdata-01.am.local:2181,bigdata-02.am.local:2181@0x18feb8ce-SendThread(bigdata-02.am.local:2181)] zookeeper.ClientCnxn: Session establishment complete on server bigdata-02.am.local/172.16.8.3:2181, sessionid = 0x26a5781c8df004d, negotiated timeout = 60000
2019-04-26 10:39:09,791 INFO [master/bigdata-01:16000] regionserver.HRegionServer: Stopping infoServer
2019-04-26 10:39:09,797 INFO [master/bigdata-01:16000] handler.ContextHandler: Stopped o.e.j.w.WebAppContext@58015e56{/,null,UNAVAILABLE}{file:/usr/hdp/3.1.0.0-78/hbase/hbase-webapps/master}
2019-04-26 10:39:09,801 INFO [master/bigdata-01:16000] server.AbstractConnector: Stopped ServerConnector@6f798482{HTTP/1.1,[http/1.1]}{0.0.0.0:16010}
2019-04-26 10:39:09,801 INFO [master/bigdata-01:16000] handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@d1d8e1a{/static,file:///usr/hdp/3.1.0.0-78/hbase/hbase-webapps/static/,UNAVAILABLE}
2019-04-26 10:39:09,801 INFO [master/bigdata-01:16000] handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@76332405{/logs,file:///var/log/hbase/,UNAVAILABLE}
2019-04-26 10:39:09,802 INFO [master/bigdata-01:16000] regionserver.HRegionServer: stopping server bigdata-01.am.local,16000,1556249946549
2019-04-26 10:39:09,802 INFO [master/bigdata-01:16000] zookeeper.ReadOnlyZKClient: Close zookeeper connection 0x18feb8ce to bigdata-03.am.local:2181,bigdata-01.am.local:2181,bigdata-02.am.local:2181
2019-04-26 10:39:09,805 INFO [ReadOnlyZKClient-bigdata-03.am.local:2181,bigdata-01.am.local:2181,bigdata-02.am.local:2181@0x18feb8ce] zookeeper.ZooKeeper: Session: 0x26a5781c8df004d closed
2019-04-26 10:39:09,805 INFO [ReadOnlyZKClient-bigdata-03.am.local:2181,bigdata-01.am.local:2181,bigdata-02.am.local:2181@0x18feb8ce-EventThread] zookeeper.ClientCnxn: EventThread shut down
2019-04-26 10:39:09,806 INFO [master/bigdata-01:16000] regionserver.HRegionServer: stopping server bigdata-01.am.local,16000,1556249946549; all regions closed.
2019-04-26 10:39:09,806 INFO [master/bigdata-01:16000] hbase.ChoreService: Chore service for: master/bigdata-01:16000 had [] on shutdown
2019-04-26 10:39:09,807 WARN [master/bigdata-01:16000] master.ActiveMasterManager: Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2019-04-26 10:39:09,808 INFO [master/bigdata-01:16000] ipc.NettyRpcServer: Stopping server on /172.16.8.2:16000
2019-04-26 10:39:09,819 INFO [master/bigdata-01:16000] zookeeper.ZooKeeper: Session: 0x26a5781c8df004c closed
2019-04-26 10:39:09,819 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2019-04-26 10:39:09,819 INFO [master/bigdata-01:16000] regionserver.HRegionServer: Exiting; stopping=bigdata-01.am.local,16000,1556249946549; zookeeper connection closed.
Please help me!
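From the trace above, the master aborts because the hbase user cannot read /apps/hbase/data/hbase.version, which the error shows as owned by root:hdfs with mode -rw-------. A minimal sketch of what could be checked and corrected, assuming the HBase root dir really is /apps/hbase/data and that the hbase service user is meant to own it (run as the HDFS superuser; user and group names may differ on your cluster):

# inspect the current owner and permissions of the HBase root dir and version file
sudo -u hdfs hdfs dfs -ls /apps/hbase/data

# hand the HBase root dir back to the hbase user (assumption: hbase:hdfs is the intended owner)
sudo -u hdfs hdfs dfs -chown -R hbase:hdfs /apps/hbase/data

After that, restarting HBase Master (for example from Ambari) should let it read hbase.version during checkRootDir. Since RangerHdfsAuthorizer appears in the stack, Ranger HDFS policies are in force, so an equivalent read/write/execute policy for the hbase user on /apps/hbase/data may also be needed if the POSIX permissions alone are not sufficient.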
... View more
Labels: