Member since 06-14-2016 | 69 Posts | 28 Kudos Received | 7 Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
| 5717 | 07-30-2018 06:45 PM
| 3720 | 06-22-2018 10:28 PM
| 840 | 06-20-2018 04:29 AM
| 776 | 06-20-2018 04:24 AM
| 1891 | 06-15-2018 08:24 PM
07-25-2017
09:23 AM
1 Kudo
@R Patel
Security Protocol is PLAINTEXT; it looks like your Kafka cluster is not secured. Could you please confirm? Could you also tell me your Kafka version? Thanks!
07-24-2017
07:25 PM
@R Patel Could you please attach the broker server log covering the timestamp when you saw this error? Also, what are the current values for linger time (linger.ms) and request.timeout.ms? Please also provide the properties of the PublishKafka processor. Are you seeing the same issue when you publish to any other Kafka topic? Thanks!
07-20-2017
07:52 AM
@Anil Reddy It looks like a long processing time between poll calls (when the processor handles a large volume of data) can exceed session.timeout.ms and trigger a group rebalance. One thing you can try is increasing group.max.session.timeout.ms on the broker side and setting higher values for request.timeout.ms and session.timeout.ms in the ConsumeKafka configuration. Please let me know if it helps (an illustrative sketch of the settings follows below). Thank you!
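As an illustrative sketch (the values below are assumptions and should be tuned for your workload), the two sides could look like:
On the Kafka brokers (for example via Ambari -> Kafka -> Configs, adding the property under custom kafka-broker if it is not already present): group.max.session.timeout.ms=300000
In the ConsumeKafka processor (added as dynamic properties, which are passed through to the Kafka client):
session.timeout.ms=120000
request.timeout.ms=130000
Keeping request.timeout.ms larger than session.timeout.ms is generally required by the consumer client.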
07-14-2017
10:09 AM
@Naveen R The Consumer Lag graph under Fetcher Lag Metrics is actually the lag of the replica fetcher, not the consumer lag per consumer group. You can see this from the metric name itself, for example: kafka.server.FetcherLagMetrics.ConsumerLag.clientId.ReplicaFetcherThread-0-1001.partition.0.topic.* As of now we do not have a specific Grafana graph for consumer lag per consumer group. Thank you.
07-12-2017
07:24 PM
@Zhao Chaofeng The deletion might not go through in the following scenarios: 1. a broker hosting one of the replicas for that topic is down; 2. a partition reassignment for partitions of that topic is in progress; 3. a preferred replica election for partitions of that topic is in progress. Kindly check whether your brokers are up and running fine. You can also check the state-change logs to monitor the state of the topic after you ran the delete command; if you can share the state-change logs here as well, that would be helpful (see the example below). The last option would be to delete it manually, but I would suggest checking the cause first. Thank you!
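As a quick illustration (the log location is an assumption; adjust it to your broker's log directory), you can search the state-change log for the topic after issuing the delete command:
$ grep -i "<topic_name>" /var/log/kafka/state-change.log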
07-10-2017
11:01 AM
@Dhiraj Sardana Could you please also share a Storm UI screenshot capturing the spout and bolt metrics. Thank you.
07-10-2017
10:57 AM
@Zhao Chaofeng When you delete a topic, it is marked for deletion and eventually gets deleted. In the screenshot that you provided, it is working as designed. The 'Note' that you see in the delete command output is a general reminder to set delete.topic.enable=true; if it is already set to true, you can ignore that 'Note'. From the screenshot it looks like deletion is working fine: after deleting, the topic no longer appears when you list topics, and it shows up again only after you re-create it. May I know how you are verifying that the topic is not getting deleted? (A quick verification example follows below.)
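For example, a simple way to verify is to list the topics after the delete (assuming the usual ZooKeeper port):
$ bin/kafka-topics.sh --zookeeper <zk_host>:2181 --list
A topic that is still pending cleanup typically shows up with a 'marked for deletion' note until the brokers finish removing it.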
06-29-2017
07:35 AM
4 Kudos
In a non-Kerberized environment, to have the Kafka console consumer display all messages, run the following. Note: Here, <broker_host> is the Kafka broker hostname, <zk_host> is the ZooKeeper hostname, and the Kafka port is 6667:
Using the old consumer API (prior to HDP 2.5.x), run the following: bin/kafka-console-consumer.sh --zookeeper <zk_host>:2181 --topic test --from-beginning
Using the new consumer API (HDP 2.5.x onwards), run the following: bin/kafka-console-consumer.sh --bootstrap-server <broker_host>:6667 --topic test --from-beginning
To view a specific number of messages in a Kafka topic, use the --max-messages option. For example, to view only the oldest message, run the console consumer with --from-beginning and --max-messages 1:
Using the old consumer API (prior to HDP 2.5.x): bin/kafka-console-consumer.sh --zookeeper <zk_host>:2181 --topic test --from-beginning --max-messages 1
Using the new consumer API (HDP 2.5.x onwards): bin/kafka-console-consumer.sh --bootstrap-server <broker_host>:6667 --topic test --from-beginning --max-messages 1
06-29-2017
07:31 AM
3 Kudos
Problem: While implementing AutoHDFS, the following error was thrown in the Nimbus log: Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: Authentication failed, URL: https://da0gdal202.match.corp:9393/kms/v1/?op=GETDELEGATIONTOKEN&doAs=gdsreader&renewer=hdfs-hdpprod%40MATCH.CORP&user.name=hdfs, status: 403, message: Forbidden
at org.apache.hadoop.security.authentication.client.AuthenticatedURL.extractToken(AuthenticatedURL.java:278)
at org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:77)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:212)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:170)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:371)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider$4.run(KMSClientProvider.java:1024)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider$4.run(KMSClientProvider.java:1019)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
Cause: 1. No symlink from the Ranger KMS conf to core-site.xml and hdfs-site.xml. 2. Missing 'hadoop.kms.proxyuser.hdfs.groups' and 'hadoop.kms.proxyuser.hdfs.hosts' in kms-site.xml. Solution: 1. Created symlinks from the Ranger KMS conf to core-site.xml and hdfs-site.xml (see the sketch after the properties below).
2. Added the following properties in kms-site.xml: <property>
<name>hadoop.kms.proxyuser.hdfs.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.kms.proxyuser.hdfs.hosts</name>
<value>*</value>
</property>
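As a rough sketch of step 1 (the paths below are assumptions based on a typical HDP layout; adjust them to your installation), the symlinks could be created as:
# ln -s /etc/hadoop/conf/core-site.xml /etc/ranger/kms/conf/core-site.xml
# ln -s /etc/hadoop/conf/hdfs-site.xml /etc/ranger/kms/conf/hdfs-site.xml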
06-29-2017
07:25 AM
3 Kudos
Problem: In HDP 2.3.4 and Ambari 2.4.2, Storm nimbus was not coming up: 2017-05-30 14:35:18.756 o.a.t.s.TThreadPoolServer [ERROR] Error occurred during processing of message.
java.lang.RuntimeException: No nimbus leader participant host found, have you started your nimbus hosts?? Following is the error from Nimbus log: 2017-05-30 14:35:18.750 b.s.d.nimbus [WARN] principal: storm@EXAMPLE.COM is trying to impersonate principal: ambari-server@EXAMPLE.COM
2017-05-30 14:35:18.751 b.s.d.nimbus [WARN] impersonation attempt but nimbus.impersonation.authorizer has no authorizer configured. potential
security risk, please see SECURITY.MD to learn how to configure impersonation authorizer.
2017-05-30 14:35:18.756 o.a.t.s.TThreadPoolServer [ERROR] Error occurred during processing of message.
java.lang.RuntimeException: No nimbus leader participant host found, have you started your nimbus hosts?
Cause: The property 'nimbus.impersonation.authorizer' was set to 'org.apache.storm.security.auth.authorizer.ImpersonationAuthorizer'. Prior to HDP 2.5, the Storm package was backtype instead of org.apache. Solution: Modify the property as follows: nimbus.impersonation.authorizer=backtype.storm.security.auth.authorizer.ImpersonationAuthorizer
06-29-2017
07:06 AM
3 Kudos
Problem Description: Unable to start Storm Nimbus from Ambari; the following error is thrown: raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh ln -s /usr/lib/storm/lib/ambari-metrics-storm-sink-legacy-with-common-*.jar /usr/hdp/current/storm-nimbus/lib/ambari-metrics-storm-sink.jar' returned 1. ln: target `/usr/hdp/current/storm-nimbus/lib/ambari-metrics-storm-sink.jar' is not a directory Cause: Two legacy jars under storm lib: # ls -ld /usr/lib/storm/lib/ambari-metrics-storm-sink-legacy-with-common*
-rw-r--r-- 1 root root 1937430 May 11 15:57 /usr/lib/storm/lib/ambari-metrics-storm-sink-legacy-with-common-2.4.1.0.22.jar
-rw-r--r-- 1 root root 1933392 Nov 23 2016 /usr/lib/storm/lib/ambari-metrics-storm-sink-legacy-with-common-2.4.2.0.136.jar Solution: As the Ambari version was 2.4.2, the ambari-metrics-storm-sink-legacy-with-common-2.4.1.0.22.jar was moved to a different location and Storm Nimbus was started (an illustrative command follows below).
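For illustration (the destination directory is arbitrary), the older jar can be moved out of the way like this, after which Storm Nimbus can be started from Ambari:
# mv /usr/lib/storm/lib/ambari-metrics-storm-sink-legacy-with-common-2.4.1.0.22.jar /tmp/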
06-29-2017
06:51 AM
1 Kudo
The following error was seen in the supervisor log: 2017-06-09 14:40:18.348 o.a.s.d.supervisor [ERROR] Error on initialization of server mk-supervisor
java.lang.RuntimeException: java.lang.ClassNotFoundException: backtype.storm.generated.LSSupervisorId
at org.apache.storm.utils.LocalState.deserialize(LocalState.java:83)
at org.apache.storm.utils.LocalState.get(LocalState.java:130)
at org.apache.storm.local_state$ls_supervisor_id.invoke(local_state.clj:61)
at org.apache.storm.daemon.supervisor$standalone_supervisor$reify__7977.prepare(supervisor.clj:1216)
at org.apache.storm.daemon.supervisor$fn__7833$exec_fn__3537__auto____7834.invoke(supervisor.clj:766)
at clojure.lang.AFn.applyToHelper(AFn.java:160)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.core$apply.invoke(core.clj:630)
at org.apache.storm.daemon.supervisor$fn__7833$mk_supervisor__7878.doInvoke(supervisor.clj:764)
at clojure.lang.RestFn.invoke(RestFn.java:436)
at org.apache.storm.daemon.supervisor$_launch.invoke(supervisor.clj:1204)
at org.apache.storm.daemon.supervisor$_main.invoke(supervisor.clj:1237)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.AFn.applyTo(AFn.java:144)
at org.apache.storm.daemon.supervisor.main(Unknown Source)
Caused by: java.lang.ClassNotFoundException: backtype.storm.generated.LSSupervisorId
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.storm.utils.LocalState.deserialize(LocalState.java:78)
... 14 more
2017-06-09 14:40:18.351 o.a.s.util [ERROR] Halting process: ("Error on initialization")
java.lang.RuntimeException: ("Error on initialization")
at org.apache.storm.util$exit_process_BANG_.doInvoke(util.clj:341)
at clojure.lang.RestFn.invoke(RestFn.java:423)
at org.apache.storm.daemon.supervisor$fn__7833$mk_supervisor__7878.doInvoke(supervisor.clj:764)
at clojure.lang.RestFn.invoke(RestFn.java:436)
at org.apache.storm.daemon.supervisor$_launch.invoke(supervisor.clj:1204)
at org.apache.storm.daemon.supervisor$_main.invoke(supervisor.clj:1237)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.AFn.applyTo(AFn.java:144)
at org.apache.storm.daemon.supervisor.main(Unknown Source)
Cause: 1. A few Storm configs were still referencing backtype packages. 2. Stale state in ZooKeeper and in the storm-local data. Solution: 1. Searched the configs and changed all backtype package references to org.apache, since Storm in HDP 2.5 onwards uses org.apache packages.
2. Follow the steps below to clear the stale state and local data:
-> Deactivate all running topologies.
-> Stop the Storm service.
-> Delete all state under ZooKeeper: $ /usr/hdp/current/zookeeper-client/bin/zkCli.sh (optionally, in a secure environment, specify -server zk.server:port)
> rmr /storm
-> Delete all state under the storm-local directory. Please make sure to run this on all Storm hosts: $ rm -rf <value of storm.local.dir>
-> Start the Storm service.
06-28-2017
10:27 PM
2 Kudos
Problem Description: After deleting a few Kafka topics and creating the same topics again, producing/consuming failed with a 'Not Leader for this partition' exception, and clients were reading stale metadata. The following error was observed: 17/03/10 10:45:35 ERROR ApplicationMaster: User class threw exception: org.apache.spark.SparkException: org.apache.spark.SparkException: Couldn't find leaders for Set([akrnohij-wng-fp9,47], [akrnohij-wng-fp4,224], [akrnohij-wng-fp11,172], [akrnohij-wng-fp1,84], [akrnohij-wng-fp10,117], [akrnohij-wng-fp10,168], [akrnohij-wng-fp4,176], [akrnohij-wng-fp1,136], [akrnohij-wng-fp9,167], [akrnohij-wng-fp2,174], [akrnohij-wng-fp11,74], [akrnohij-wng-fp3,189], [akrnohij-wng-fp11,200], [akrnohij-wng-fp11,168], [akrnohij-wng-fp4,149], [akrnohij-wng-fp7,127], [akrnohij-wng-fp6,39], [akrnohij-wng-fp10,133], [akrnohij-wng-fp9,171], [akrnohij-wng-fp5,175], [akrnohij-wng-fp7,181] Cause: The ZooKeeper ACL on the partition state znode was not correct: getAcl /brokers/topics/akrnohij-wng-fp4/partitions/135/state
'auth,'
: cdrwa
'world,'anyone
: r
In addition, the following property was set in the ZooKeeper env: -Dzookeeper.skipACL=yes
Solution:
1. Deleted the topic manually, as the deletion was stuck.
2. Removed the property -Dzookeeper.skipACL=yes.
3. Restarted the ZooKeeper and Kafka services.
NOTE: It is dangerous to use the -Dzookeeper.skipACL=yes property. Instead, it is recommended to authenticate as the Kafka service principal if you need to delete the znode for Kafka topics (see the sketch below).
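If manual cleanup does become necessary, a hedged sketch (assuming your ZooKeeper client session is authenticated as the kafka service principal, for example via kinit with the kafka keytab and a suitable client JAAS configuration) would be:
$ /usr/hdp/current/zookeeper-client/bin/zkCli.sh -server <zk_host>:2181
> rmr /brokers/topics/<topic_name>
> rmr /admin/delete_topics/<topic_name>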
06-27-2017
06:44 AM
2 Kudos
PROBLEM: While implementing AutoHDFS for storm-hdfs integration, the following error was observed: 2017-05-19 11:21:44.865 o.a.s.h.c.s.AutoHDFS [ERROR] Could not populate HDFS credentials.
java.lang.RuntimeException: Failed to get delegation tokens.
at org.apache.storm.hdfs.common.security.AutoHDFS.getHadoopCredentials(AutoHDFS.java:242)
at org.apache.storm.hdfs.common.security.AutoHDFS.populateCredentials(AutoHDFS.java:76)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93)
at clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:28)
at org.apache.storm.daemon.nimbus$mk_reified_nimbus$reify__11226.submitTopologyWithOpts(nimbus.clj:1544)
at org.apache.storm.generated.Nimbus$Processor$submitTopologyWithOpts.getResult(Nimbus.java:2940)
at org.apache.storm.generated.Nimbus$Processor$submitTopologyWithOpts.getResult(Nimbus.java:2924)
at org.apache.storm.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.storm.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.storm.security.auth.SaslTransportPlugin$TUGIWrapProcessor.process(SaslTransportPlugin.java:138)
at org.apache.storm.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributedFileSystem not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2214)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2746)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2759)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2795)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2777)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:386)
at org.apache.storm.hdfs.common.security.AutoHDFS$1.run(AutoHDFS.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1704)
at org.apache.storm.hdfs.common.security.AutoHDFS.getHadoopCredentials(AutoHDFS.java:213)
... 17 more
Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributedFileSystem not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2120)
CAUSE: Missing Hadoop dependencies in pom.xml caused the following exception: Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributedFileSystem not found
SOLUTION: To resolve this issue, add the following dependencies to pom.xml: <dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-hdfs</artifactId>
<version>2.7.3.2.5.3.0-37</version>
<exclusions>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>2.7.3.2.5.3.0-37</version>
<exclusions>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
06-25-2017
09:49 PM
1 Kudo
@Tarun Kumar
Please add the option --security-protocol SASL_PLAINTEXT to your producer command (depending upon the security protocol that you have configured), and use the broker hostname instead of localhost: $ /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <broker_hostname>:6667 --topic test --security-protocol SASL_PLAINTEXT
Thank you!
06-25-2017
09:41 PM
@Karan Alang You may also need to check what 'unclean.leader.election.enable' is set to. For example, if you have 3 replicas in the ISR and you kill the leader, a new leader will be elected only from the replicas that are in sync if unclean.leader.election.enable is set to false. If unclean.leader.election.enable is set to true, then replicas not in the ISR can also be elected as leader as a last resort for high availability, which may result in data loss. By default this property is set to true. So if you do not want data loss, it is recommended to set unclean.leader.election.enable=false (an example follows below). For more details: http://kafka.apache.org/documentation.html#design_uncleanleader Thank you!
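For example (assuming the cluster is managed with Ambari), you could add the following under the Kafka broker configuration and restart the brokers:
unclean.leader.election.enable=false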
06-25-2017
09:27 PM
@heta desai Please refer to this doc for a detailed understanding of the Storm topology lifecycle: http://storm.apache.org/releases/1.0.3/Lifecycle-of-a-topology.html Also, you can set the number of worker processes, executors, and tasks. Please refer to this doc to understand Storm parallelism: http://storm.apache.org/releases/1.0.1/Understanding-the-parallelism-of-a-Storm-topology.html Hope these links help!
05-15-2017
11:15 PM
@Naveen Keshava Could you please provide your pom.xml file as well. Thanks!
03-26-2017
09:18 PM
SYMPTOM: While integrating Storm 1.0.1 with Elasticsearch 5.0.0, the following error is observed: Exception in thread "main" java.lang.NoClassDefFoundError: org/elasticsearch/common/base/Preconditions
at org.apache.storm.elasticsearch.common.EsConfig.<init>(EsConfig.java:62)
at org.apache.storm.elasticsearch.common.EsConfig.<init>(EsConfig.java:49)
at com.mz.pipeline.StreamToES5_1.main(StreamToES5_1.java:72)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Caused by: java.lang.ClassNotFoundException: org.elasticsearch.common.base.Preconditions
The following were the Maven dependencies: <dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-core</artifactId>
<version>1.0.2</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>log4j-over-slf4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch</artifactId>
<version>5.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-elasticsearch</artifactId>
<version>1.0.2</version>
</dependency>
ROOT CAUSE: Apache libraries were used to compile/package the topology instead of using the Hortonworks repository.
RESOLUTION: Please add the Hortonworks repository to the pom.xml:
<repositories>
<repository>
<id>hortonworks</id>
<url>http://repo.hortonworks.com/content/groups/public/</url>
</repository>
</repositories>
Then change the Storm artifact versions, which should be in this format: <apache_version>.<HDP_version>
For example, for 'storm-core' in HDP 2.5.0.0 the version would be 1.0.1.2.5.0.0-1245. Similarly, for 'storm-elasticsearch' it would be 1.0.1.2.5.0.0-1245 (for HDP 2.5.0.0). Please find the version corresponding to your HDP release here: http://repo.hortonworks.com/content/groups/public/org/apache/storm/ A concrete example is shown below.
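For instance, for HDP 2.5.0.0 the storm-core dependency would look like this (the same version pattern applies to the other Storm artifacts):
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-core</artifactId>
<version>1.0.1.2.5.0.0-1245</version>
</dependency>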
03-24-2017
06:13 PM
ROOT CAUSE: kafka.metrics.reporters in Advanced kafka-broker was pointing to the Ganglia metrics reporter: kafka.metrics.reporters=kafka.ganglia.KafkaGangliaMetricsReporter
RESOLUTION: kafka.metrics.reporters should point to the Ambari metrics reporter instead, like this:
kafka.metrics.reporters=org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter
You can modify the property via Ambari -> Kafka -> Configs -> Advanced kafka-broker -> kafka.metrics.reporters. Please save the changes and restart the required services.
03-24-2017
06:06 PM
1 Kudo
SYMPTOM: Storm Nimbus fails to come up after Ambari was upgraded to version 2.4 while on HDP 2.4: 2016-11-22 13:29:43.066 [timer] b.s.d.nimbus [ERROR] Error when processing event
java.lang.NullPointerException
at clojure.lang.Numbers.ops(Numbers.java:961) ~[clojure-1.6.0.jar:?]
at clojure.lang.Numbers.isZero(Numbers.java:90) ~[clojure-1.6.0.jar:?]
at backtype.storm.util$partition_fixed.invoke(util.clj:892) ~[storm-core-0.10.0.2.4.0.0-169.jar:0.10.0.2.4.0.0-169]
at clojure.lang.AFn.applyToHelper(AFn.java:156) ~[clojure-1.6.0.jar:?]
at clojure.lang.AFn.applyTo(AFn.java:144) ~[clojure-1.6.0.jar:?]
at clojure.core$apply.invoke(core.clj:624) ~[clojure-1.6.0.jar:?]
at clojure.lang.AFn.applyToHelper(AFn.java:156) ~[clojure-1.6.0.jar:?]
at clojure.lang.RestFn.applyTo(RestFn.java:132) ~[clojure-1.6.0.jar:?]
at clojure.core$apply.invoke(core.clj:626) ~[clojure-1.6.0.jar:?]
at clojure.core$partial$fn__4228.doInvoke(core.clj:2468) ~[clojure-1.6.0.jar:?]
at clojure.lang.RestFn.invoke(RestFn.java:408) ~[clojure-1.6.0.jar:?]
at backtype.storm.util$map_val$iter__366__370$fn__371.invoke(util.clj:301) ~[storm-core-0.10.0.2.4.0.0-169.jar:0.10.0.2.4.0.0-169] ROOT CAUSE: It is a known issue: https://hortonworks.jira.com/browse/BUG-66735
RESOLUTION: It has been fixed in HDP 2.5.3; please refer to the section 'Upgrade' here: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_release-notes/content/fixed_issues.html
WORKAROUND: This can be resolved with the following steps: 1. Deactivate all running topologies.
2. Stop the Storm service.
3. Delete all state under ZooKeeper: -> /usr/hdp/current/zookeeper-client/bin/zkCli.sh (optionally, in a secure environment, specify -server zk.server:port) -> rmr /storm
4. Delete all state under the storm-local directory. Please make sure to run this on all Storm hosts: rm -rf <value of storm.local.dir>
5. Start the Storm service.
03-24-2017
05:59 PM
1 Kudo
We can configure multiple listeners by giving a comma-separated list of URIs that Kafka will listen on. Please follow the steps below to implement this:
1. Add the listeners as a comma-separated value in Ambari -> Kafka -> Configs -> listeners, for example: listeners=PLAINTEXT://myhost:6667, PLAINTEXTSASL://myhost:6668
2. Add an ACL for the 'Anonymous' user, because in PLAINTEXT connections the user's identity is set to Anonymous. For example: $ bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=ambari-server:2181 --add --allow-principal User:Anonymous --producer --topic topic-oct
3. Run the producer with the security protocol set to PLAINTEXT to use the PLAINTEXT listener, or set it to PLAINTEXTSASL to use the other listener, something like this: $ bin/kafka-console-producer.sh --broker-list ambari-server.support.com:6667 --topic topic-oct --security-protocol PLAINTEXT
$ bin/kafka-console-producer.sh --broker-list ambari-server.support.com:6668 --topic topic-oct --security-protocol PLAINTEXTSASL
Kindly replace the broker hostname:port, ZooKeeper hostname:port, and topic names according to the values configured in your cluster. Note: This is only supported in HDP 2.3.4 and later; it is not available in prior versions.
03-24-2017
05:51 PM
1 Kudo
PROBLEM: Enable GC logging for ZooKeeper.
SOLUTION: When using the Ambari web UI:
1. Click on the Zookeeper Service
2. Click on Configs tab
3. Navigate to 'Advanced zookeeper-env'
4. Locate the setting 'zookeeper-env template'
5. Append the following GC options to the existing 'export SERVER_JVMFLAGS="-Xmx1024m"' line:
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$ZOO_LOG_DIR/zookeeper.gc.`date +'%Y%m%d%H%M'`
To be precise, it should look like: export SERVER_JVMFLAGS="-Xmx1024m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$ZOO_LOG_DIR/zookeeper.gc.`date +'%Y%m%d%H%M'`"
6. Save the changes and restart the Zookeeper service when prompted
When the cluster is managed outside Ambari:
1. On each ZooKeeper node, open zookeeper-env.sh; you can find it at /etc/zookeeper/conf/zookeeper-env.sh.
2. Append the above parameters to the SERVER_JVMFLAGS value (a quick verification example follows below).
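Once ZooKeeper is restarted, you can confirm that GC logging is active by checking for the new log file (the directory below is a common default and is only an assumption):
$ ls -ltr /var/log/zookeeper/zookeeper.gc.*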
03-24-2017
05:41 PM
SYMPTOMS: When using the SelectHiveQL processor in NiFi to run Hive queries, it does not use the specified YARN queue ('nifi') if it is configured as in the following HiveConnectionPool settings: jdbc:hive2://<server>:<port>;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;auth=kerberos;principal=hive/_HOST@DOMAIN.COM;tez.queue.name=nifi
The following is an example of the queue settings in the YARN capacity scheduler: yarn.scheduler.capacity.root.queues=default,nifi
yarn.scheduler.capacity.root.default.user-limit-factor=1
yarn.scheduler.capacity.root.default.state=RUNNING
yarn.scheduler.capacity.root.default.maximum-capacity=40
yarn.scheduler.capacity.root.default.capacity=40
yarn.scheduler.capacity.root.default.acl_submit_applications=yarn
yarn.scheduler.capacity.root.capacity=100
yarn.scheduler.capacity.root.acl_administer_queue=yarn
yarn.scheduler.capacity.root.accessible-node-labels=*
yarn.scheduler.capacity.node-locality-delay=40
yarn.scheduler.capacity.maximum-applications=10000
yarn.scheduler.capacity.maximum-am-resource-percent=0.2
yarn.scheduler.capacity.queue-mappings-override.enable=true
yarn.scheduler.capacity.root.default.acl_administer_queue=yarn
yarn.scheduler.capacity.root.nifi.acl_administer_queue=*
yarn.scheduler.capacity.root.nifi.acl_submit_applications=*
yarn.scheduler.capacity.root.nifi.capacity=60
yarn.scheduler.capacity.root.nifi.maximum-capacity=60
yarn.scheduler.capacity.root.nifi.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.nifi.ordering-policy=fifo
yarn.scheduler.capacity.root.nifi.state=RUNNING
yarn.scheduler.capacity.root.nifi.user-limit-factor=1
RESOLUTION: Please add tez.queue.name with a question mark, like this: ?tez.queue.name=<queue_name> For example: jdbc:hive2://<server>:<port>;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;auth=kerberos;principal=hive/_HOST@DOMAIN.COM;?tez.queue.name=nifi
03-24-2017
05:23 PM
ISSUE: While trying to run the ConsumeKafka processor to consume messages from secure Kafka, it throws the following error: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:702) ~[na:na]
at org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:557) ~[na:na]
at org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:540) ~[na:na]
at org.apache.nifi.processors.kafka.pubsub.ConsumerPool.createKafkaConsumer(ConsumerPool.java:136) ~[na:na]
at org.apache.nifi.processors.kafka.pubsub.ConsumerPool.obtainConsumer(ConsumerPool.java:106) ~[na:na]
at org.apache.nifi.processors.kafka.pubsub.ConsumeKafka_0_10.onTrigger(ConsumeKafka_0_10.java:285) ~[na:na]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) ~[nifi-api-1.1.0.2.1.1.0-2.jar:1.1.0.2.1.1.0-2]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) ~[nifi-framework-core-1.1.0.2.1.1.0-2.jar:1.1.0.2.1.1.0-2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.0.2.1.1.0-2.jar:1.1.0.2.1.1.0-2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.0.2.1.1.0-2.jar:1.1.0.2.1.1.0-2]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.0.2.1.1.0-2.jar:1.1.0.2.1.1.0-2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_121]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_121]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_121]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: Jaas configuration not found
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:86) ~[na:na]
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:70) ~[na:na]
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:83) ~[na:na]
at org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:623) ~[na:na]
... 17 common frames omitted
Caused by: org.apache.kafka.common.KafkaException: Jaas configuration not found
at org.apache.kafka.common.security.kerberos.KerberosLogin.getServiceName(KerberosLogin.java:299) ~[na:na]
at org.apache.kafka.common.security.kerberos.KerberosLogin.configure(KerberosLogin.java:103) ~[na:na]
at org.apache.kafka.common.security.authenticator.LoginManager.(LoginManager.java:45) ~[na:na]
at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:68) ~[na:na]
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:78) ~[na:na]
... 20 common frames omitted
Caused by: java.io.IOException: Could not find a 'KafkaClient' entry in this configuration.
at org.apache.kafka.common.security.JaasUtils.jaasConfig(JaasUtils.java:50) ~[na:na]
at org.apache.kafka.common.security.kerberos.KerberosLogin.getServiceName(KerberosLogin.java:297) ~[na:na]
... 24 common frames omitted
The Security Protocol is set to SASL_PLAINTEXT and the Kerberos Service Name to kafka in the ConsumeKafka properties. ROOT CAUSE: The JAAS configuration is missing in conf/bootstrap.conf. RESOLUTION: When Kafka is secure and the Security Protocol is set to SASL_PLAINTEXT in the ConsumeKafka processor configuration, there are two factors that need to be considered: 1. The Kerberos Service Name must be provided, for example 'kafka'.
2. The JAAS configuration file must be set in conf/bootstrap.conf with something like the following (example): java.arg.15=-Djava.security.auth.login.config=/path/to/jaas-client.config (a sample JAAS file follows below)
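For reference, a minimal keytab-based JAAS client file could look like the following (the keytab path and principal are placeholders; adjust them for your environment):
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/nifi.service.keytab"
storeKey=true
useTicketCache=false
serviceName="kafka"
principal="nifi@EXAMPLE.COM";
};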
02-12-2017
08:10 PM
@pp z Did the information help?
02-10-2017
10:26 AM
1 Kudo
@pp z Hi, could you please make sure your kafka_client_jaas.conf is configured properly? For example: Kafka client configuration with keytab, for producers: KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/storm.service.keytab"
storeKey=true
useTicketCache=false
serviceName="kafka"
principal="kafka@EXAMPLE.COM";
}; Kafka client configuration without keytab, for producers: KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=true
renewTicket=true
serviceName="kafka";
}; Kindly refer to this document for a detailed explanation: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_security/content/secure-kafka-config-options.html Kindly let me know if it helps. Thanks!
01-26-2017
08:42 AM
@yjiang Did it work when you created the topic as the kafka user?
01-23-2017
06:21 PM
@yjiang Yes, please try creating the topic as the kafka user. When we use kafka-topics.sh to create a test topic, what this script does is create a node at the ZooKeeper path /brokers/topics/test; the broker threads then get notified that a new node was created, and the broker creates the actual data for the topic 'test', that is, the metadata and physical data.
But notice that the brokers run as kafka/host@REALM, so if a user other than kafka creates a topic, the node gets permissions such as: world:anyone:r
sasl:xyz:crdwa So the new node created in the ZooKeeper path will have these permissions. Now when the broker gets notified and tries to create the metadata and physical data for this new topic, it won't be able to, because the broker's principal is kafka but the topic's ACL belongs to xyz. A sketch of creating the topic as the kafka user is shown below.
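As a sketch (the keytab path and principal below follow the usual HDP defaults and may differ in your cluster), creating the topic as the kafka user would look like:
$ kinit -kt /etc/security/keytabs/kafka.service.keytab kafka/<broker_host>@EXAMPLE.COM
$ bin/kafka-topics.sh --create --zookeeper <zk_host>:2181 --replication-factor 1 --partitions 1 --topic test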
01-22-2017
11:55 AM
@yjiang Could you please tell us which user you used to create the topic? Also, could you please provide your server.properties file?