Member since: 06-14-2016
Posts: 69
Kudos Received: 28
Solutions: 7
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 7023 | 07-30-2018 06:45 PM
 | 5541 | 06-22-2018 10:28 PM
 | 1428 | 06-20-2018 04:29 AM
 | 1412 | 06-20-2018 04:24 AM
 | 2898 | 06-15-2018 08:24 PM
06-25-2017 09:41 PM
@Karan Alang You may also need to check what 'unclean.leader.election.enable' is set to. For example, if you have 3 replicas in the ISR and you kill the leader, the new leader will be elected only from the replicas that are still in sync when unclean.leader.election.enable is set to false. If it is set to true, replicas that are not in the ISR can also be elected as leader as a last resort for high availability, which may result in data loss. By default this property is set to true, so if you do not want data loss, it is recommended to set unclean.leader.election.enable=false. For more details: http://kafka.apache.org/documentation.html#design_uncleanleader Thank you!
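A minimal sketch of both ways to set it (the hostnames and topic name below are placeholders): broker-wide in server.properties, or as a per-topic override via kafka-configs.sh:

# Broker-wide default, in server.properties (or the equivalent Ambari Kafka config):
unclean.leader.election.enable=false

# Per-topic override (zk-host:2181 and my-topic are examples):
bin/kafka-configs.sh --zookeeper zk-host:2181 --entity-type topics --entity-name my-topic --alter --add-config unclean.leader.election.enable=false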
03-26-2017 09:18 PM
SYMPTOM: While integrating Storm 1.0.1 with Elasticsearch 5.0.0, the following error is observed:
Exception in thread "main" java.lang.NoClassDefFoundError: org/elasticsearch/common/base/Preconditions
at org.apache.storm.elasticsearch.common.EsConfig.<init>(EsConfig.java:62)
at org.apache.storm.elasticsearch.common.EsConfig.<init>(EsConfig.java:49)
at com.mz.pipeline.StreamToES5_1.main(StreamToES5_1.java:72)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Caused by: java.lang.ClassNotFoundException: org.elasticsearch.common.base.Preconditions
The following Maven dependencies were being used:
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-core</artifactId>
<version>1.0.2</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>log4j-over-slf4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch</artifactId>
<version>5.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-elasticsearch</artifactId>
<version>1.0.2</version>
</dependency>
ROOT CAUSE: The topology was compiled and packaged against the Apache artifacts instead of those from the Hortonworks repository.
RESOLUTION: Please add the Hortonworks repository to the pom.xml:
<repositories>
<repository>
<id>hortonworks</id>
<url>http://repo.hortonworks.com/content/groups/public/</url>
</repository>
</repositories>
Then change the Storm artifact versions, which should be in the format <apache_version>.<HDP_version>.
For example, for 'storm-core' in HDP 2.5.0.0 the version would be 1.0.1.2.5.0.0-1245, and likewise 1.0.1.2.5.0.0-1245 for 'storm-elasticsearch'. Please find the version corresponding to your HDP release here: http://repo.hortonworks.com/content/groups/public/org/apache/storm/
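Under those assumptions, the two Storm dependencies above would end up looking roughly like this for HDP 2.5.0.0 (a sketch only; keep any exclusions you already have, and substitute the build number that matches your HDP version):

<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-core</artifactId>
  <version>1.0.1.2.5.0.0-1245</version>
</dependency>
<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-elasticsearch</artifactId>
  <version>1.0.1.2.5.0.0-1245</version>
</dependency>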
03-24-2017 06:13 PM
ROOT CAUSE: kafka.metrics.reporters in Advanced kafka-broker was pointing to the Ganglia metrics reporter:
kafka.metrics.reporters=kafka.ganglia.KafkaGangliaMetricsReporter
RESOLUTION: It should point to the Ambari Metrics reporter instead:
kafka.metrics.reporters=org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter
You can modify the property via Ambari -> Kafka -> Configs -> Advanced kafka-broker -> kafka.metrics.reporters. Please save the changes and restart the required services.
03-24-2017 06:06 PM
1 Kudo
SYMPTOM: Storm Nimbus fails to come up after Ambari is upgraded to version 2.4 while on HDP 2.4:
2016-11-22 13:29:43.066 [timer] b.s.d.nimbus [ERROR] Error when processing event
java.lang.NullPointerException
at clojure.lang.Numbers.ops(Numbers.java:961) ~[clojure-1.6.0.jar:?]
at clojure.lang.Numbers.isZero(Numbers.java:90) ~[clojure-1.6.0.jar:?]
at backtype.storm.util$partition_fixed.invoke(util.clj:892) ~[storm-core-0.10.0.2.4.0.0-169.jar:0.10.0.2.4.0.0-169]
at clojure.lang.AFn.applyToHelper(AFn.java:156) ~[clojure-1.6.0.jar:?]
at clojure.lang.AFn.applyTo(AFn.java:144) ~[clojure-1.6.0.jar:?]
at clojure.core$apply.invoke(core.clj:624) ~[clojure-1.6.0.jar:?]
at clojure.lang.AFn.applyToHelper(AFn.java:156) ~[clojure-1.6.0.jar:?]
at clojure.lang.RestFn.applyTo(RestFn.java:132) ~[clojure-1.6.0.jar:?]
at clojure.core$apply.invoke(core.clj:626) ~[clojure-1.6.0.jar:?]
at clojure.core$partial$fn__4228.doInvoke(core.clj:2468) ~[clojure-1.6.0.jar:?]
at clojure.lang.RestFn.invoke(RestFn.java:408) ~[clojure-1.6.0.jar:?]
at backtype.storm.util$map_val$iter__366__370$fn__371.invoke(util.clj:301) ~[storm-core-0.10.0.2.4.0.0-169.jar:0.10.0.2.4.0.0-169]
ROOT CAUSE: This is a known issue: https://hortonworks.jira.com/browse/BUG-66735
RESOLUTION: This has been fixed in HDP 2.5.3; please refer to the 'Fixed Issues' section here: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_release-notes/content/fixed_issues.html
WORKAROUND: The following steps can be used as a workaround (a consolidated shell sketch follows the list):
1. Deactivate all running topologies.
2. Stop the Storm service.
3. Delete all state under ZooKeeper: run /usr/hdp/current/zookeeper-client/bin/zkCli.sh (in a secure environment, optionally specify -server zk.server:port) and execute: rmr /storm
4. Delete all state under the storm-local directory. Please make sure to run this on all Storm hosts: rm -rf <value of storm.local.dir>
5. Start the Storm service.
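Put together, the workaround looks roughly like the following shell sketch (the ZooKeeper host and the storm.local.dir value are placeholders; stopping and starting the Storm service itself is done from Ambari):

# Step 1: deactivate every running topology
storm deactivate <topology_name>
# Step 2: stop the Storm service from Ambari
# Step 3: clear Storm state in ZooKeeper
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server zk.server:2181
rmr /storm
quit
# Step 4: on every Storm host, clear the local state directory
rm -rf <value of storm.local.dir>
# Step 5: start the Storm service from Ambari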
03-24-2017 05:59 PM
1 Kudo
We can configure multiple listeners by giving a comma-separated list of URIs that Kafka will listen on. Please follow the steps below to implement this:
1. Add the listeners as a comma-separated value in Ambari -> Kafka -> Configs -> listeners, for example:
listeners=PLAINTEXT://myhost:6667,PLAINTEXTSASL://myhost:6668
2. Add an ACL for the 'Anonymous' user, because on PLAINTEXT connections the user's identity is set to Anonymous. For example:
$ bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=ambari-server:2181 --add --allow-principal User:Anonymous --producer --topic topic-oct
3. Run the producer with the security protocol set to PLAINTEXT to use the PLAINTEXT listener, or to PLAINTEXTSASL to use the SASL listener:
$ bin/kafka-console-producer.sh --broker-list ambari-server.support.com:6667 --topic topic-oct --security-protocol PLAINTEXT
$ bin/kafka-console-producer.sh --broker-list ambari-server.support.com:6668 --topic topic-oct --security-protocol PLAINTEXTSASL
Kindly replace the broker hostname:port, ZooKeeper hostname:port, and topic names with the values configured in your cluster. Note: this is only supported in HDP 2.3.4 and later; it is not available in prior versions.
03-24-2017 05:51 PM
1 Kudo
PROBLEM: How to enable GC logging for ZooKeeper.
SOLUTION: When using the Ambari web UI:
1. Click on the ZooKeeper service.
2. Click on the Configs tab.
3. Navigate to 'Advanced zookeeper-env'.
4. Locate the setting 'zookeeper-env template'.
5. Append the following to 'export SERVER_JVMFLAGS=-Xmx1024m':
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$ZOO_LOG_DIR/zookeeper.gc.`date +'%Y%m%d%H%M'`
To be precise, it should look like:
export SERVER_JVMFLAGS="-Xmx1024m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$ZOO_LOG_DIR/zookeeper.gc.`date +'%Y%m%d%H%M'`"
6. Save the changes and restart the ZooKeeper service when prompted.
When the cluster is managed outside Ambari:
1. On each ZooKeeper node, open zookeeper-env.sh, which can be found at /etc/zookeeper/conf/.
2. Append the above-mentioned parameters to the SERVER_JVMFLAGS value.
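As a quick post-restart sanity check (one way to do it; the log directory shown assumes the common default /var/log/zookeeper), confirm the JVM picked up the flags and the GC log is being written:

# The -Xloggc:... flag should appear in the ZooKeeper process command line
ps -ef | grep [Q]uorumPeerMain
# A freshly timestamped GC log should exist under the ZooKeeper log directory
ls -lt /var/log/zookeeper/zookeeper.gc.*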
03-24-2017 05:41 PM
SYMPTOMS: When using the SelectHiveQL processor in NiFi to run Hive queries, the queries do not use the specified YARN queue ('nifi') when it is configured as follows in the HiveConnectionPool settings:
jdbc:hive2://<server>:<port>;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;auth=kerberos;principal=hive/_HOST@DOMAIN.COM;tez.queue.name=nifi
The following is an example of the queue setup in the YARN capacity scheduler:
yarn.scheduler.capacity.root.queues=default,nifi
yarn.scheduler.capacity.root.default.user-limit-factor=1
yarn.scheduler.capacity.root.default.state=RUNNING
yarn.scheduler.capacity.root.default.maximum-capacity=40
yarn.scheduler.capacity.root.default.capacity=40
yarn.scheduler.capacity.root.default.acl_submit_applications=yarn
yarn.scheduler.capacity.root.capacity=100
yarn.scheduler.capacity.root.acl_administer_queue=yarn
yarn.scheduler.capacity.root.accessible-node-labels=*
yarn.scheduler.capacity.node-locality-delay=40
yarn.scheduler.capacity.maximum-applications=10000
yarn.scheduler.capacity.maximum-am-resource-percent=0.2
yarn.scheduler.capacity.queue-mappings-override.enable=true
yarn.scheduler.capacity.root.default.acl_administer_queue=yarn
yarn.scheduler.capacity.root.nifi.acl_administer_queue=*
yarn.scheduler.capacity.root.nifi.acl_submit_applications=*
yarn.scheduler.capacity.root.nifi.capacity=60
yarn.scheduler.capacity.root.nifi.maximum-capacity=60
yarn.scheduler.capacity.root.nifi.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.nifi.ordering-policy=fifo
yarn.scheduler.capacity.root.nifi.state=RUNNING
yarn.scheduler.capacity.root.nifi.user-limit-factor=1
RESOLUTION: Prefix tez.queue.name with a question mark, i.e. ?tez.queue.name=<queue_name>. In a Hive JDBC URL, entries separated only by ';' are treated as session variables for the driver, whereas properties after the '?' are passed as Hive/Tez configuration, which is what the queue assignment requires. For example:
jdbc:hive2://<server>:<port>;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;auth=kerberos;principal=hive/_HOST@DOMAIN.COM;?tez.queue.name=nifi
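As a quick way to confirm the queue setting is actually picked up (the server, port, and principal are placeholders, and a valid Kerberos ticket is assumed), you can open the same URL with beeline and echo the property back:

beeline -u "jdbc:hive2://<server>:<port>;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_HOST@DOMAIN.COM;?tez.queue.name=nifi" -e "set tez.queue.name;"
# The output should include: tez.queue.name=nifi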
03-24-2017 05:23 PM
ISSUE: While trying to run the ConsumeKafka processor to consume messages from a secured Kafka cluster, it throws the following error:
org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:702) ~[na:na]
at org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:557) ~[na:na]
at org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:540) ~[na:na]
at org.apache.nifi.processors.kafka.pubsub.ConsumerPool.createKafkaConsumer(ConsumerPool.java:136) ~[na:na]
at org.apache.nifi.processors.kafka.pubsub.ConsumerPool.obtainConsumer(ConsumerPool.java:106) ~[na:na]
at org.apache.nifi.processors.kafka.pubsub.ConsumeKafka_0_10.onTrigger(ConsumeKafka_0_10.java:285) ~[na:na]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) ~[nifi-api-1.1.0.2.1.1.0-2.jar:1.1.0.2.1.1.0-2]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) ~[nifi-framework-core-1.1.0.2.1.1.0-2.jar:1.1.0.2.1.1.0-2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.0.2.1.1.0-2.jar:1.1.0.2.1.1.0-2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.0.2.1.1.0-2.jar:1.1.0.2.1.1.0-2]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.0.2.1.1.0-2.jar:1.1.0.2.1.1.0-2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_121]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_121]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_121]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: Jaas configuration not found
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:86) ~[na:na]
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:70) ~[na:na]
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:83) ~[na:na]
at org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:623) ~[na:na]
... 17 common frames omitted
Caused by: org.apache.kafka.common.KafkaException: Jaas configuration not found
at org.apache.kafka.common.security.kerberos.KerberosLogin.getServiceName(KerberosLogin.java:299) ~[na:na]
at org.apache.kafka.common.security.kerberos.KerberosLogin.configure(KerberosLogin.java:103) ~[na:na]
at org.apache.kafka.common.security.authenticator.LoginManager.(LoginManager.java:45) ~[na:na]
at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:68) ~[na:na]
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:78) ~[na:na]
... 20 common frames omitted
Caused by: java.io.IOException: Could not find a 'KafkaClient' entry in this configuration.
at org.apache.kafka.common.security.JaasUtils.jaasConfig(JaasUtils.java:50) ~[na:na]
at org.apache.kafka.common.security.kerberos.KerberosLogin.getServiceName(KerberosLogin.java:297) ~[na:na]
... 24 common frames omitted
In the ConsumeKafka properties, Security Protocol is set to SASL_PLAINTEXT and Kerberos Service Name to 'kafka'.
ROOT CAUSE: The JAAS configuration is missing from conf/bootstrap.conf.
RESOLUTION: When Kafka is secure and Security Protocol is set to SASL_PLAINTEXT in the ConsumeKafka processor configuration, two things need to be in place:
1. The Kerberos Service Name must be provided, for example 'kafka'.
2. The JAAS configuration file must be set in conf/bootstrap.conf with something like the following (example): java.arg.15=-Djava.security.auth.login.config=/path/to/jaas-client.config
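For reference, a minimal 'KafkaClient' JAAS entry looks like the following (the keytab path and principal are examples and must match your environment):

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/nifi.service.keytab"
  principal="nifi/host.domain.com@DOMAIN.COM";
};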
01-26-2017 08:42 AM
@yjiang Did it work when you created the topic as the kafka user?
01-23-2017 06:21 PM
@yjiang Yes, please try creating the topic as the kafka user. When we use kafka-topics.sh to create a test topic, the script creates a znode at the ZooKeeper path /brokers/topics/test. The broker threads are then notified that a new node has been created, and the broker creates the actual data for topic 'test', that is, the metadata and physical data.
But note that the brokers run as kafka/host@REALM, so if a user other than kafka creates a topic, the new znode gets permissions such as:
world:anyone:r
sasl:xyz:cdrwa
When the broker is notified and tries to create the metadata and physical data for this new topic, it won't be able to, because the broker's principal is kafka while the topic's znode is owned by xyz.
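You can see this for yourself from the ZooKeeper CLI by inspecting the ACL on the topic's znode (the host:port and topic name are examples):

/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server zk.server:2181
getAcl /brokers/topics/test
# A topic created by user 'xyz' would show something like:
# 'world,'anyone : r
# 'sasl,'xyz : cdrwa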