Member since
10-01-2016
156
Posts
8
Kudos Received
6
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 8400 | 04-04-2019 09:41 PM
 | 3185 | 06-04-2018 08:34 AM
 | 1491 | 05-23-2018 01:03 PM
 | 3002 | 05-21-2018 07:12 AM
 | 1849 | 05-08-2018 10:48 AM
07-09-2018
05:31 PM
Two days later: I have just tried to restart Atlas while HBase and Solr are already running, and it started. Thanks @sunile.manjee
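For anyone who wants to script the same start order, here is a rough sketch using the Ambari REST API (the admin/admin credentials, localhost:8080, and the cluster name Sandbox are Sandbox defaults; substitute your own values):
# queue an asynchronous start request for Atlas; swap ATLAS for HBASE or AMBARI_INFRA to start those first
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start service"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://localhost:8080/api/v1/clusters/Sandbox/services/ATLAS
Each PUT only queues the start, so wait for HBase and Ambari Infra to come up fully before starting Atlas.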
07-07-2018
02:48 PM
Thanks @Vinicius Higa Murakami. I will check it when I get back to work on Monday.
07-07-2018
04:04 AM
I have a freshly imported Sandbox 2.6.4. When I try to start Atlas, it gives me the following error: No live SolrServers available to handle this request. I realized that the Solr service belongs to the Ambari Infra component, so I started Ambari Infra and tried Atlas again, but got the following error:
resource_management.core.exceptions.ExecutionFailed: Execution of 'cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n' returned 1. atlas_titan
ATLAS_ENTITY_AUDIT_EVENTS
atlas
TABLE
java exception
ERROR Java::JavaIo::IOException: Can't get master address from ZooKeeper; znode data == null
I tried again after starting HBase, but the error turned into the following:
resource_management.core.exceptions.ExecutionFailed: Execution of 'cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n' returned 1. atlas_titan
ATLAS_ENTITY_AUDIT_EVENTS
atlas
TABLE
java exception
ERROR Java::OrgApacheHadoopHbaseIpc::RemoteWithExtrasException: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2732)
at org.apache.hadoop.hbase.master.MasterRpcServices.getTableNames(MasterRpcServices.java:943)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:59924)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
How can I start Atlas?
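In the meantime, here is how I check whether the HBase master has finished initializing before retrying (the znode path /hbase-unsecure is the HDP default on non-Kerberized clusters; adjust it if yours differs):
# prints the active master's address once HMaster is registered; "znode data == null" means it is not up yet
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server localhost:2181 get /hbase-unsecure/master
# from the HBase side, this keeps failing with PleaseHoldException until master initialization completes
echo "status 'simple'" | hbase shell -n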
Labels:
- Apache Ambari
- Apache Atlas
07-07-2018
03:58 AM
Thank you @Vinicius Higa Murakami. I don't use Kerberos, so I think the issue is not related to the keytab. As for the link, I applied
atlas.jaas.KafkaClient.option.renewTicket=false
atlas.jaas.KafkaClient.option.useTicketCache=false
but it didn't work. I couldn't apply Option 2 ("Login to ambari server and remove both parameters by running the below commands") because it said not to use configs.sh and to use configs.py instead, and I couldn't get configs.py to work either.
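For completeness, this is roughly what I expect the configs.py removal to look like (a sketch only; the admin/admin credentials, Ambari host, port, cluster name, and the application-properties config type are assumptions, and the exact flags may differ per Ambari version):
# delete both Kafka client parameters from Atlas's application-properties, then restart Atlas
python /var/lib/ambari-server/resources/scripts/configs.py -u admin -p admin -l ambari-host -t 8080 \
  -n mycluster -a delete -c application-properties -k atlas.jaas.KafkaClient.option.renewTicket
python /var/lib/ambari-server/resources/scripts/configs.py -u admin -p admin -l ambari-host -t 8080 \
  -n mycluster -a delete -c application-properties -k atlas.jaas.KafkaClient.option.useTicketCache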
07-06-2018
03:02 PM
I am trying to import the result of a SQL query from an RDBMS into a Hive table. I have used this query hundreds of times, but today I got a strange error related to Kafka and Atlas:
18/07/06 17:17:49 ERROR security.InMemoryJAASConfiguration: Unable to add JAAS configuration for client [KafkaClient] as it is missing param [atlas.jaas.KafkaClient.loginModuleName]. Skipping JAAS config for [KafkaClient]
org.apache.atlas.notification.NotificationException: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for ATLAS_HOOK-0 due to 30078 ms has passed since batch creation plus linger time
at org.apache.atlas.kafka.KafkaNotification.sendInternalToProducer(KafkaNotification.java:239)
at org.apache.atlas.kafka.KafkaNotification.sendInternal(KafkaNotification.java:212)
at org.apache.atlas.notification.AbstractNotification.send(AbstractNotification.java:114)
at org.apache.atlas.hook.AtlasHook.notifyEntitiesInternal(AtlasHook.java:143)
at org.apache.atlas.hook.AtlasHook.notifyEntities(AtlasHook.java:128)
at org.apache.atlas.sqoop.hook.SqoopHook.publish(SqoopHook.java:190)
at org.apache.atlas.sqoop.hook.SqoopHook.publish(SqoopHook.java:51)
at org.apache.sqoop.mapreduce.PublishJobData.publishJobData(PublishJobData.java:52)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:284)
at org.apache.sqoop.manager.SqlManager.importQuery(SqlManager.java:748)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:509)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:615)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:225)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.main(Sqoop.java:243)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for ATLAS_HOOK-0 due to 30078 ms has passed since batch creation plus linger time
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:65)
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:52)
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:25)
at org.apache.atlas.kafka.KafkaNotification.sendInternalToProducer(KafkaNotification.java:230)
... 17 more
Despite this error, Sqoop sometimes works and sometimes doesn't. When it doesn't, it creates the table but leaves it empty.
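If anyone hits the same timeout, a quick way to check whether the ATLAS_HOOK topic is reachable from the Sqoop host (the zk-host and broker-host names are placeholders):
# a topic with a missing leader or shrunken ISR here would explain the producer expiring batches
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper zk-host:2181 --describe --topic ATLAS_HOOK
# try producing a test message directly; if this also times out, the problem is broker-side, not Sqoop
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list broker-host:6667 --topic ATLAS_HOOK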
Labels:
- Apache Sqoop
06-20-2018
11:52 AM
Hi guys, I had the same problem and tried many things. Finally I changed my YARN LLAP queue max capacity from 50% to 100%, and then HiveServer2 Interactive started successfully. The likely cause in my case: allocated containers exceeded the llap queue's max capacity.
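For reference, the change corresponds to these capacity-scheduler properties (the queue name llap is the HDP default; I set them through Ambari's YARN Queue Manager rather than editing the file by hand):
yarn.scheduler.capacity.root.llap.capacity=50
yarn.scheduler.capacity.root.llap.maximum-capacity=100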
06-14-2018
06:07 AM
Hi @dbains, the Kafka version is 0.10.1. Here is the broker configuration:
# Generated by Apache Ambari. Wed Jun 13 11:53:45 2018
advertised.listeners=PLAINTEXT://hadooptest01.datalonga.com:6667
auto.create.topics.enable=true
auto.leader.rebalance.enable=true
broker.rack=/default-rack
compression.type=producer
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
controller.message.queue.size=10
controller.socket.timeout.ms=30000
default.replication.factor=1
delete.topic.enable=true
external.kafka.metrics.exclude.prefix=kafka.network.RequestMetrics,kafka.server.DelayedOperationPurgatory,kafka.server.BrokerTopicMetrics.BytesRejectedPerSec,kafka.server.KafkaServer.ClusterId
external.kafka.metrics.include.prefix=kafka.network.RequestMetrics.ResponseQueueTimeMs.request.OffsetCommit.98percentile,kafka.network.RequestMetrics.ResponseQueueTimeMs.request.Offsets.95percentile,kafka.network.RequestMetrics.ResponseSendTimeMs.request.Fetch.95percentile,kafka.network.RequestMetrics.RequestsPerSec.request
fetch.purgatory.purge.interval.requests=10000
kafka.ganglia.metrics.group=kafka
kafka.ganglia.metrics.host=localhost
kafka.ganglia.metrics.port=8671
kafka.ganglia.metrics.reporter.enabled=true
kafka.metrics.reporters=org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter
kafka.timeline.metrics.hosts=hadooptest03.datalonga.com
kafka.timeline.metrics.maxRowCacheSize=10000
kafka.timeline.metrics.port=6188
kafka.timeline.metrics.protocol=http
kafka.timeline.metrics.reporter.enabled=true
kafka.timeline.metrics.reporter.sendInterval=5900
kafka.timeline.metrics.truststore.password=******
kafka.timeline.metrics.truststore.path=/etc/security/clientKeys/all.jks
kafka.timeline.metrics.truststore.type=jks
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10
listeners=PLAINTEXT://0.0.0.0:6667
log.cleanup.interval.mins=10
log.dirs=/data/01/kafka/kafka-logs
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.bytes=-1
log.retention.hours=168
log.roll.hours=168
log.segment.bytes=1073741824
message.max.bytes=1000000
min.insync.replicas=1
num.io.threads=8
num.network.threads=3
num.partitions=1
num.recovery.threads.per.data.dir=1
num.replica.fetchers=1
offset.metadata.max.bytes=4096
offsets.commit.required.acks=-1
offsets.commit.timeout.ms=5000
offsets.load.buffer.size=5242880
offsets.retention.check.interval.ms=600000
offsets.retention.minutes=86400000
offsets.topic.compression.codec=0
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3
offsets.topic.segment.bytes=104857600
port=6667
producer.purgatory.purge.interval.requests=10000
queued.max.requests=500
replica.fetch.max.bytes=1048576
replica.fetch.min.bytes=1
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.lag.max.messages=4000
replica.lag.time.max.ms=10000
replica.socket.receive.buffer.bytes=65536
replica.socket.timeout.ms=30000
sasl.enabled.mechanisms=GSSAPI
sasl.mechanism.inter.broker.protocol=GSSAPI
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
zookeeper.connect=hadooptest03.datalonga.com:2181,hadooptest02.datalonga.com:2181,hadooptest01.datalonga.com:2181
zookeeper.connection.timeout.ms=25000
zookeeper.session.timeout.ms=30000
zookeeper.sync.time.ms=2000
06-13-2018
07:26 AM
I created a topic named erkan_deneme. With the following command I sent some messages to the topic:
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list hadooptest01.datalonga.com:6667 --topic erkan_deneme
Then I tried to receive messages from erkan_deneme:
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server hadooptest01.datalonga.com:6667 --topic erkan_deneme --from-beginning
I couldn't get any messages: no warnings, no errors, and nothing in the Kafka logs. I was going mad until I realized that I had not created any Ranger policy on the erkan_deneme topic. So I created a Ranger policy for that topic and user, but it didn't work either. Then I tried the following command:
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --zookeeper hadooptest01.datalonga.com:2181 --topic erkan_deneme --from-beginning
and I received the messages. The question is: why did I have to use --zookeeper instead of --bootstrap-server for kafka-console-consumer, even though there is a deprecation warning against using ZooKeeper? It is so weird that it doesn't work with --bootstrap-server.
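One thing worth checking (an educated guess on my side, not a confirmed diagnosis): the new consumer stores its offsets in the internal __consumer_offsets topic, which per the broker config I posted needs offsets.topic.replication.factor=3 live brokers when it is first created. Whether it exists and has leaders can be checked with:
# an under-replicated or leaderless __consumer_offsets topic would stall the new consumer silently
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper hadooptest01.datalonga.com:2181 --describe --topic __consumer_offsets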
Labels:
- Apache Kafka
06-12-2018
02:01 PM
"Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper]." @dbains, you should update your command accordingly.