Member since: 06-14-2016
Posts: 69
Kudos Received: 28
Solutions: 7
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 7017 | 07-30-2018 06:45 PM
 | 5528 | 06-22-2018 10:28 PM
 | 1425 | 06-20-2018 04:29 AM
 | 1408 | 06-20-2018 04:24 AM
 | 2893 | 06-15-2018 08:24 PM
06-28-2017
10:27 PM
2 Kudos
Problem Description: After deleting a few Kafka topics and recreating topics with the same names, producers and consumers hit a "Not Leader for this partition" exception and were reading old metadata. The following error was observed:
17/03/10 10:45:35 ERROR ApplicationMaster: User class threw exception: org.apache.spark.SparkException: org.apache.spark.SparkException: Couldn't find leaders for Set([akrnohij-wng-fp9,47], [akrnohij-wng-fp4,224], [akrnohij-wng-fp11,172], [akrnohij-wng-fp1,84], [akrnohij-wng-fp10,117], [akrnohij-wng-fp10,168], [akrnohij-wng-fp4,176], [akrnohij-wng-fp1,136], [akrnohij-wng-fp9,167], [akrnohij-wng-fp2,174], [akrnohij-wng-fp11,74], [akrnohij-wng-fp3,189], [akrnohij-wng-fp11,200], [akrnohij-wng-fp11,168], [akrnohij-wng-fp4,149], [akrnohij-wng-fp7,127], [akrnohij-wng-fp6,39], [akrnohij-wng-fp10,133], [akrnohij-wng-fp9,171], [akrnohij-wng-fp5,175], [akrnohij-wng-fp7,181]
Cause: The ZooKeeper ACL for the partition state znode was not correct:
getAcl /brokers/topics/akrnohij-wng-fp4/partitions/135/state
'auth,'
: cdrwa
'world,'anyone
: r
In addition, the following property was set in the ZooKeeper env: -Dzookeeper.skipACL=yes
Solution:
1. Deleted the topic manually, as the deletion was stuck.
2. Removed the property -Dzookeeper.skipACL=yes.
3. Restarted the ZooKeeper and Kafka services.
NOTE: It is dangerous to use the -Dzookeeper.skipACL=yes property; instead, it is recommended to authenticate as the Kafka service principal if you need to delete the znodes for Kafka topics.
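For reference, deleting a stuck topic's znodes manually via the ZooKeeper CLI might look like the sketch below. The paths are the standard Kafka znode locations, using the topic name from this case; the zkCli.sh path assumes a default HDP layout, and <zk-host> is a placeholder. Only do this while authenticated as the Kafka service principal, per the note above.
$ /usr/hdp/current/zookeeper-client/bin/zkCli.sh -server <zk-host>:2181
rmr /admin/delete_topics/akrnohij-wng-fp4
rmr /brokers/topics/akrnohij-wng-fp4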
06-27-2017
06:44 AM
2 Kudos
PROBLEM: While implementing AutoHDFS for storm-hdfs integration, the following error was observed: 2017-05-19 11:21:44.865 o.a.s.h.c.s.AutoHDFS [ERROR] Could not populate HDFS credentials.
java.lang.RuntimeException: Failed to get delegation tokens.
at org.apache.storm.hdfs.common.security.AutoHDFS.getHadoopCredentials(AutoHDFS.java:242)
at org.apache.storm.hdfs.common.security.AutoHDFS.populateCredentials(AutoHDFS.java:76)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93)
at clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:28)
at org.apache.storm.daemon.nimbus$mk_reified_nimbus$reify__11226.submitTopologyWithOpts(nimbus.clj:1544)
at org.apache.storm.generated.Nimbus$Processor$submitTopologyWithOpts.getResult(Nimbus.java:2940)
at org.apache.storm.generated.Nimbus$Processor$submitTopologyWithOpts.getResult(Nimbus.java:2924)
at org.apache.storm.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.storm.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.storm.security.auth.SaslTransportPlugin$TUGIWrapProcessor.process(SaslTransportPlugin.java:138)
at org.apache.storm.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributedFileSystem not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2214)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2746)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2759)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2795)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2777)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:386)
at org.apache.storm.hdfs.common.security.AutoHDFS$1.run(AutoHDFS.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1704)
at org.apache.storm.hdfs.common.security.AutoHDFS.getHadoopCredentials(AutoHDFS.java:213)
... 17 more
Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributedFileSystem not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2120)
CAUSE: Hadoop dependencies were missing from pom.xml, which caused the following exception: Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributedFileSystem not found
SOLUTION: To resolve this issue, add the following dependencies to pom.xml:
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-hdfs</artifactId>
<version>2.7.3.2.5.3.0-37</version>
<exclusions>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>2.7.3.2.5.3.0-37</version>
<exclusions>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
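As a quick sanity check (assuming a standard Maven project), you can confirm that hadoop-hdfs is now resolved in the topology's dependency tree:
$ mvn dependency:tree -Dincludes=org.apache.hadoop:hadoop-hdfs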
03-26-2017
09:18 PM
SYMPTOM: While integrating Storm 1.0.1 with Elasticsearch 5.0.0, the following error is observed: Exception in thread "main" java.lang.NoClassDefFoundError: org/elasticsearch/common/base/Preconditions
at org.apache.storm.elasticsearch.common.EsConfig.<init>(EsConfig.java:62)
at org.apache.storm.elasticsearch.common.EsConfig.<init>(EsConfig.java:49)
at com.mz.pipeline.StreamToES5_1.main(StreamToES5_1.java:72)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Caused by: java.lang.ClassNotFoundException: org.elasticsearch.common.base.Preconditions
The Maven dependencies were as follows:
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-core</artifactId>
<version>1.0.2</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>log4j-over-slf4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch</artifactId>
<version>5.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-elasticsearch</artifactId>
<version>1.0.2</version>
</dependency>
ROOT CAUSE: The topology was compiled/packaged against the Apache libraries instead of the Hortonworks repository.
RESOLUTION: Please add the Hortonworks repository to the pom.xml:
<repositories>
<repository>
<id>hortonworks</id>
<url>http://repo.hortonworks.com/content/groups/public/</url>
</repository>
</repositories>
Then change the Storm artifact versions, which should be in the format <apache_version>.<HDP_version>.
For example, for 'storm-core' on HDP 2.5.0.0, the version would be 1.0.1.2.5.0.0-1245. Similarly, for 'storm-elasticsearch' on HDP 2.5.0.0 it would be 1.0.1.2.5.0.0-1245. Please find the version corresponding to your HDP release here: http://repo.hortonworks.com/content/groups/public/org/apache/storm/
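Putting it together, the 'storm-core' dependency above would then read as follows (the version shown is the HDP 2.5.0.0 build mentioned above; substitute the build matching your HDP release):
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-core</artifactId>
<version>1.0.1.2.5.0.0-1245</version>
</dependency>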
03-24-2017
06:13 PM
ROOT CAUSE: kafka.metrics.reporters in Advanced kafka-broker was pointing to the Ganglia metrics reporter:
kafka.metrics.reporters=kafka.ganglia.KafkaGangliaMetricsReporter
RESOLUTION: It should point to the Ambari Metrics reporter instead:
kafka.metrics.reporters=org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter
You can modify the property via Ambari -> Kafka -> Configs -> Advanced kafka-broker -> kafka.metrics.reporters. Please save the changes and restart the required services.
03-24-2017
06:06 PM
1 Kudo
SYMPTOM: Storm Nimbus fails to come up after Ambari was upgraded to version 2.4 while on HDP 2.4: 2016-11-22 13:29:43.066 [timer] b.s.d.nimbus [ERROR] Error when processing event
java.lang.NullPointerException
at clojure.lang.Numbers.ops(Numbers.java:961) ~[clojure-1.6.0.jar:?]
at clojure.lang.Numbers.isZero(Numbers.java:90) ~[clojure-1.6.0.jar:?]
at backtype.storm.util$partition_fixed.invoke(util.clj:892) ~[storm-core-0.10.0.2.4.0.0-169.jar:0.10.0.2.4.0.0-169]
at clojure.lang.AFn.applyToHelper(AFn.java:156) ~[clojure-1.6.0.jar:?]
at clojure.lang.AFn.applyTo(AFn.java:144) ~[clojure-1.6.0.jar:?]
at clojure.core$apply.invoke(core.clj:624) ~[clojure-1.6.0.jar:?]
at clojure.lang.AFn.applyToHelper(AFn.java:156) ~[clojure-1.6.0.jar:?]
at clojure.lang.RestFn.applyTo(RestFn.java:132) ~[clojure-1.6.0.jar:?]
at clojure.core$apply.invoke(core.clj:626) ~[clojure-1.6.0.jar:?]
at clojure.core$partial$fn__4228.doInvoke(core.clj:2468) ~[clojure-1.6.0.jar:?]
at clojure.lang.RestFn.invoke(RestFn.java:408) ~[clojure-1.6.0.jar:?]
at backtype.storm.util$map_val$iter__366__370$fn__371.invoke(util.clj:301) ~[storm-core-0.10.0.2.4.0.0-169.jar:0.10.0.2.4.0.0-169]
ROOT CAUSE: This is a known issue: https://hortonworks.jira.com/browse/BUG-66735
RESOLUTION: It has been fixed in HDP 2.5.3; please refer to the section 'Upgrade' here: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_release-notes/content/fixed_issues.html
WORKAROUND: This can be resolved with the following steps (see the sketch after this list for a command-line walkthrough):
1. Deactivate all running topologies.
2. Stop the Storm service.
3. Delete all states under ZooKeeper:
/usr/hdp/current/zookeeper-client/bin/zkCli.sh (optionally, in a secure environment, specify -server zk.server:port)
rmr /storm
4. Delete all states under the storm-local directory. Please make sure to run this on all Storm hosts: rm -rf <value of storm.local.dir>
5. Start the Storm service.
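A minimal command-line sketch of steps 3 and 4, assuming a hypothetical ZooKeeper host zk1.example.com and the HDP default storm.local.dir of /hadoop/storm (substitute your cluster's values):
$ /usr/hdp/current/zookeeper-client/bin/zkCli.sh -server zk1.example.com:2181
rmr /storm
quit
$ rm -rf /hadoop/storm/*    (run this on every Storm host)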
03-24-2017
05:59 PM
1 Kudo
We can configure multiple listeners by giving a comma-separated list of URIs that Kafka will listen on. Please follow the steps below to implement this:
1. Add the listeners as a comma-separated value in Ambari -> Kafka -> Configs -> listeners, for example:
listeners=PLAINTEXT://myhost:6667,PLAINTEXTSASL://myhost:6668
2. Add an ACL for the 'Anonymous' user, because on PLAINTEXT connections the user's identity is set to Anonymous. For example:
$ bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=ambari-server:2181 --add --allow-principal User:Anonymous --producer --topic topic-oct
3. Run the producer with the security protocol set to PLAINTEXT to use the PLAINTEXT listener, and set it to PLAINTEXTSASL to use the other listener, like this:
$ bin/kafka-console-producer.sh --broker-list ambari-server.support.com:6667 --topic topic-oct --security-protocol PLAINTEXT
$ bin/kafka-console-producer.sh --broker-list ambari-server.support.com:6668 --topic topic-oct --security-protocol PLAINTEXTSASL
Kindly replace the broker hostname:port, ZooKeeper hostname:port, and topic names according to the values configured in your cluster. Note: this is only supported in HDP 2.3.4+; it is not available in prior versions.
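To verify from the consuming side, a console consumer can be pointed at either listener. This is a sketch assuming the HDP console consumer accepts the same --security-protocol flag as the producer above:
$ bin/kafka-console-consumer.sh --zookeeper ambari-server:2181 --topic topic-oct --from-beginning --security-protocol PLAINTEXTSASL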
03-24-2017
05:51 PM
1 Kudo
PROBLEM: Enable GC logging for ZooKeeper.
SOLUTION: When using the Ambari web UI:
1. Click on the ZooKeeper service.
2. Click on the Configs tab.
3. Navigate to 'Advanced zookeeper-env'.
4. Locate the setting 'zookeeper-env template'.
5. Append the following to 'export SERVER_JVMFLAGS=-Xmx1024m':
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$ZOO_LOG_DIR/zookeeper.gc.`date +'%Y%m%d%H%M'`
To be precise, it should look like this:
export SERVER_JVMFLAGS="-Xmx1024m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$ZOO_LOG_DIR/zookeeper.gc.`date +'%Y%m%d%H%M'`"
6. Save the changes and restart the ZooKeeper service when prompted.
When the cluster is managed outside Ambari:
1. On each ZooKeeper node, open zookeeper-env.sh; you can find it at /etc/zookeeper/conf/.
2. Append the above-mentioned parameters to the SERVER_JVMFLAGS value, then restart ZooKeeper.
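After the restart, the new GC log should appear under the ZooKeeper log directory; for example (assuming ZOO_LOG_DIR resolves to /var/log/zookeeper, the usual HDP default):
$ ls -l /var/log/zookeeper/zookeeper.gc.*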
03-24-2017
05:41 PM
SYMPTOMS: When using the SelectHiveQL processor in NiFi to run Hive queries, the queries do not use the specified YARN queue ('nifi') when it is configured as follows in the HiveConnectionPool settings: jdbc:hive2://<server>:<port>;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;auth=kerberos;principal=hive/_HOST@DOMAIN.COM;tez.queue.name=nifi
The following is an example of the queue settings in the YARN capacity scheduler: yarn.scheduler.capacity.root.queues=default,nifi
yarn.scheduler.capacity.root.default.user-limit-factor=1
yarn.scheduler.capacity.root.default.state=RUNNING
yarn.scheduler.capacity.root.default.maximum-capacity=40
yarn.scheduler.capacity.root.default.capacity=40
yarn.scheduler.capacity.root.default.acl_submit_applications=yarn
yarn.scheduler.capacity.root.capacity=100
yarn.scheduler.capacity.root.acl_administer_queue=yarn
yarn.scheduler.capacity.root.accessible-node-labels=*
yarn.scheduler.capacity.node-locality-delay=40
yarn.scheduler.capacity.maximum-applications=10000
yarn.scheduler.capacity.maximum-am-resource-percent=0.2
yarn.scheduler.capacity.queue-mappings-override.enable=true
yarn.scheduler.capacity.root.default.acl_administer_queue=yarn
yarn.scheduler.capacity.root.nifi.acl_administer_queue=*
yarn.scheduler.capacity.root.nifi.acl_submit_applications=*
yarn.scheduler.capacity.root.nifi.capacity=60
yarn.scheduler.capacity.root.nifi.maximum-capacity=60
yarn.scheduler.capacity.root.nifi.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.nifi.ordering-policy=fifo
yarn.scheduler.capacity.root.nifi.state=RUNNING
yarn.scheduler.capacity.root.nifi.user-limit-factor=1
RESOLUTION: Please add tez.queue.name with a question mark, like this: ?tez.queue.name=<queue_name>
For example:
jdbc:hive2://<server>:<port>;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;auth=kerberos;principal=hive/_HOST@DOMAIN.COM;?tez.queue.name=nifi
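Once a query runs, the queue placement can be double-checked from the command line (a quick sanity check, not part of the original resolution); yarn application -list prints the queue for each running application:
$ yarn application -list | grep nifi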
01-23-2018
11:27 PM
Resolution didn't work. I tried:
export SPARK_HOME="/usr/hdp/current/spark2-client"
export SPARK_MAJOR_VERSION=2
kinit sa_seed_ld@IHGINT.GLOBAL -kt /etc/seed_ld.keytab
/usr/hdp/current/spark2-client/bin/spark-submit \
--verbose \
--master yarn \
--jars /usr/google/gcs/lib/gcs-connector-latest-hadoop2.jar \
--deploy-mode client \
--num-executors 10 \
--executor-memory 2G \
--executor-cores 2 \
--class com.abc.sample.DirectStreamConsumer \
--conf "spark.driver.allowMultipleContexts=true" \
--files "kafka_client_jaas.conf,/etc/seed_ld.keytab" \
--driver-java-options "-Djava.security.auth.login.config=kafka_client_jaas.conf" \
--conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=kafka_client_jaas.conf" \
--conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j-spark.properties" \
--conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=log4j-spark.properties" \
--driver-java-options "-Dsun.security.krb5.debug=true -Dsun.security.spnego.debug=true" \
sample-0.0.1-SNAPSHOT.jar
My jaas file and the above command are in the same directory.
07-20-2016
12:21 AM
I believe this is the link: https://hortonworks.my.salesforce.com/kA2E0000000fxdx?srPos=0&srKp=ka2&lang=en_US