Member since
06-14-2016
Posts: 69
Kudos Received: 28
Solutions: 7
My Accepted Solutions
Title | Views | Posted
---|---|---
| 5717 | 07-30-2018 06:45 PM
| 3720 | 06-22-2018 10:28 PM
| 840 | 06-20-2018 04:29 AM
| 775 | 06-20-2018 04:24 AM
| 1891 | 06-15-2018 08:24 PM
10-31-2022
02:34 AM
Some Kafka classes pulled in by the streaming dependencies were being overridden by conflicting implementations. The solution is to relocate all classes in the org.apache.kafka package into a shaded package. My Spark version is 2.4.1. Here is the maven-shade-plugin configuration for the Maven build (a verification sketch follows the snippet):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.4.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>org.apache.kafka</pattern>
            <shadedPattern>shade.org.apache.kafka</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
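To sanity-check the relocation, you can list the shaded jar's contents. A minimal sketch, assuming the artifact is named target/my-app-1.0-SNAPSHOT.jar (a placeholder):
# After shading, Kafka entries should appear under shade/org/apache/kafka/...
# rather than at org/apache/kafka/...
jar tf target/my-app-1.0-SNAPSHOT.jar | grep 'org/apache/kafka' | head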
07-30-2018
06:54 PM
@dhieru singh No problem at all, I am glad it helped!
11-14-2017
05:25 PM
Setting to 6667 worked. Thanks
08-17-2018
11:15 AM
We are using an unsecured Kafka connection, and we are not using MiNiFi; instead we use MQTT. This is our NiFi flow: MQTT --> PublishKafka --> ConsumeKafka --> PutCassandraQL. What NiFi flow would you suggest for this process?
07-13-2017
01:21 AM
OK, I will share the Kafka state-change.log; please pay attention to test-kafka-topic at about 2017-07-12 16:04:00. The Kafka state change log is state-change.txt
06-12-2018
02:01 PM
The console consumer prints this warning: "Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper]." @dbains you should update your command accordingly.
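For reference, a minimal sketch of both invocations, with zk-host:2181, broker-host:6667, and my-topic as placeholders:
# Deprecated old-consumer form:
bin/kafka-console-consumer.sh --zookeeper zk-host:2181 --topic my-topic
# New-consumer form:
bin/kafka-console-consumer.sh --bootstrap-server broker-host:6667 --topic my-topic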
01-31-2018
07:13 PM
"Created symlink of ranger kms conf to core site and hdfs site" is a vague statement. Could you explain a little more? I know how to create a symlink, but I don't know what you mean by "Created symlink of ranger kms conf to core site and hdfs site".
06-29-2017
07:25 AM
3 Kudos
Problem: In HDP 2.3.4 and Ambari 2.4.2, Storm nimbus was not coming up. The following is the error from the Nimbus log:
2017-05-30 14:35:18.750 b.s.d.nimbus [WARN] principal: storm@EXAMPLE.COM is trying to impersonate principal: ambari-server@EXAMPLE.COM
2017-05-30 14:35:18.751 b.s.d.nimbus [WARN] impersonation attempt but nimbus.impersonation.authorizer has no authorizer configured. potential
security risk, please see SECURITY.MD to learn how to configure impersonation authorizer.
2017-05-30 14:35:18.756 o.a.t.s.TThreadPoolServer [ERROR] Error occurred during processing of message.
java.lang.RuntimeException: No nimbus leader participant host found, have you started your nimbus hosts?
Cause: The property "nimbus.impersonation.authorizer" was set to org.apache.storm.security.auth.authorizer.ImpersonationAuthorizer. Prior to HDP 2.5, the Storm package was backtype instead of org.apache. Solution: Modify the property as follows (a quick check of your installed package is sketched below): nimbus.impersonation.authorizer=backtype.storm.security.auth.authorizer.ImpersonationAuthorizer
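A quick way to confirm which package the class lives in for your installed version; a minimal sketch, assuming the usual HDP jar location (the path is an assumption, adjust for your layout):
# The package prefix of the matching entry (backtype vs. org.apache)
# is the one the property should use
unzip -l /usr/hdp/current/storm-nimbus/lib/storm-core-*.jar | grep ImpersonationAuthorizer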
06-29-2017
07:06 AM
3 Kudos
Problem Description: Unable to start Storm nimbus from Ambari; it throws the following error:
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh ln -s /usr/lib/storm/lib/ambari-metrics-storm-sink-legacy-with-common-*.jar /usr/hdp/current/storm-nimbus/lib/ambari-metrics-storm-sink.jar' returned 1. ln: target `/usr/hdp/current/storm-nimbus/lib/ambari-metrics-storm-sink.jar' is not a directory
Cause: Two legacy jars were present under the Storm lib directory, so the wildcard in the ln command expanded to multiple files:
# ls -ld /usr/lib/storm/lib/ambari-metrics-storm-sink-legacy-with-common*
-rw-r--r-- 1 root root 1937430 May 11 15:57 /usr/lib/storm/lib/ambari-metrics-storm-sink-legacy-with-common-2.4.1.0.22.jar
-rw-r--r-- 1 root root 1933392 Nov 23 2016 /usr/lib/storm/lib/ambari-metrics-storm-sink-legacy-with-common-2.4.2.0.136.jar
Solution: As the Ambari version was 2.4.2, we moved ambari-metrics-storm-sink-legacy-with-common-2.4.1.0.22.jar to a different location and started Storm nimbus (see the sketch below).
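A minimal sketch of the fix, with /tmp/storm-lib-backup as an arbitrary backup location:
# Move the older legacy jar aside so only one file matches the wildcard
mkdir -p /tmp/storm-lib-backup
mv /usr/lib/storm/lib/ambari-metrics-storm-sink-legacy-with-common-2.4.1.0.22.jar /tmp/storm-lib-backup/
# Then start Storm nimbus from Ambari again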
06-29-2017
06:51 AM
1 Kudo
Following was the error in supervisor log: 2017-06-09 14:40:18.348 o.a.s.d.supervisor [ERROR] Error on initialization of server mk-supervisor
java.lang.RuntimeException: java.lang.ClassNotFoundException: backtype.storm.generated.LSSupervisorId
at org.apache.storm.utils.LocalState.deserialize(LocalState.java:83)
at org.apache.storm.utils.LocalState.get(LocalState.java:130)
at org.apache.storm.local_state$ls_supervisor_id.invoke(local_state.clj:61)
at org.apache.storm.daemon.supervisor$standalone_supervisor$reify__7977.prepare(supervisor.clj:1216)
at org.apache.storm.daemon.supervisor$fn__7833$exec_fn__3537__auto____7834.invoke(supervisor.clj:766)
at clojure.lang.AFn.applyToHelper(AFn.java:160)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.core$apply.invoke(core.clj:630)
at org.apache.storm.daemon.supervisor$fn__7833$mk_supervisor__7878.doInvoke(supervisor.clj:764)
at clojure.lang.RestFn.invoke(RestFn.java:436)
at org.apache.storm.daemon.supervisor$_launch.invoke(supervisor.clj:1204)
at org.apache.storm.daemon.supervisor$_main.invoke(supervisor.clj:1237)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.AFn.applyTo(AFn.java:144)
at org.apache.storm.daemon.supervisor.main(Unknown Source)
Caused by: java.lang.ClassNotFoundException: backtype.storm.generated.LSSupervisorId
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.storm.utils.LocalState.deserialize(LocalState.java:78)
... 14 more
2017-06-09 14:40:18.351 o.a.s.util [ERROR] Halting process: ("Error on initialization")
java.lang.RuntimeException: ("Error on initialization")
at org.apache.storm.util$exit_process_BANG_.doInvoke(util.clj:341)
at clojure.lang.RestFn.invoke(RestFn.java:423)
at org.apache.storm.daemon.supervisor$fn__7833$mk_supervisor__7878.doInvoke(supervisor.clj:764)
at clojure.lang.RestFn.invoke(RestFn.java:436)
at org.apache.storm.daemon.supervisor$_launch.invoke(supervisor.clj:1204)
at org.apache.storm.daemon.supervisor$_main.invoke(supervisor.clj:1237)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.AFn.applyTo(AFn.java:144)
at org.apache.storm.daemon.supervisor.main(Unknown Source)
Cause: 1. A few Storm configs were still configured with backtype packages. 2. Stale state in ZooKeeper and in the storm-local data. Solution: 1. Searched the configs and changed all backtype packages to org.apache, as Storm from HDP 2.5 onwards uses org.apache packages.
2. Follow the steps below to clear the stale state and local data (a consolidated command sketch follows the list):
-> Deactivate all running topologies.
-> Stop the Storm service.
-> Delete all Storm state under ZooKeeper: $ /usr/hdp/current/zookeeper-client/bin/zkCli.sh (in a secure environment, optionally specify -server zk.server:port)
> rmr /storm
-> Delete all state under the storm-local directory. Make sure to run this on all Storm hosts: $ rm -rf <value of storm.local.dir>
-> Start the Storm service.
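A consolidated sketch of the ZooKeeper and local-state cleanup, assuming an unsecured ZooKeeper at zk-host:2181 and /hadoop/storm as the storm.local.dir value (both placeholders):
# Remove Storm's znodes; zkCli.sh accepts the command as trailing arguments
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server zk-host:2181 rmr /storm
# On every Storm host, wipe the local state directory
rm -rf /hadoop/storm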
06-28-2017
10:27 PM
2 Kudos
Problem Description: After deleting a few Kafka topics and creating the same topics again, producing/consuming failed with NotLeaderForPartition exceptions and clients were reading stale metadata. Getting the below error:
17/03/10 10:45:35 ERROR ApplicationMaster: User class threw exception: org.apache.spark.SparkException: org.apache.spark.SparkException: Couldn't find leaders for Set([akrnohij-wng-fp9,47], [akrnohij-wng-fp4,224], [akrnohij-wng-fp11,172], [akrnohij-wng-fp1,84], [akrnohij-wng-fp10,117], [akrnohij-wng-fp10,168], [akrnohij-wng-fp4,176], [akrnohij-wng-fp1,136], [akrnohij-wng-fp9,167], [akrnohij-wng-fp2,174], [akrnohij-wng-fp11,74], [akrnohij-wng-fp3,189], [akrnohij-wng-fp11,200], [akrnohij-wng-fp11,168], [akrnohij-wng-fp4,149], [akrnohij-wng-fp7,127], [akrnohij-wng-fp6,39], [akrnohij-wng-fp10,133], [akrnohij-wng-fp9,171], [akrnohij-wng-fp5,175], [akrnohij-wng-fp7,181]
Cause: The ZooKeeper ACL for the state znode was not correct:
getAcl /brokers/topics/akrnohij-wng-fp4/partitions/135/state
'auth,'
: cdrwa
'world,'anyone
: r
The following property was set in the ZooKeeper env: -Dzookeeper.skipACL=yes
Solution:
1. Deleted the topic manually, as the deletion was stuck (see the sketch below).
2. Removed the -Dzookeeper.skipACL=yes property.
3. Restarted the ZooKeeper and Kafka services.
NOTE: Using the -Dzookeeper.skipACL=yes property is dangerous; instead, it is recommended to use the Kafka service principal if you need to delete the znodes for Kafka topics.
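A minimal sketch of the manual cleanup, assuming a Kerberized cluster, the default HDP keytab location, and placeholder topic/host names (all assumptions):
# Authenticate as the kafka service principal instead of skipping ACLs
kinit -kt /etc/security/keytabs/kafka.service.keytab kafka/$(hostname -f)
# Then remove the topic's znodes from inside zkCli:
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server zk-host:2181
rmr /brokers/topics/<topic-name>
rmr /admin/delete_topics/<topic-name>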
06-27-2017
06:44 AM
2 Kudos
PROBLEM: While implementing AutoHDFS for storm-hdfs integration, the following error was observed: 2017-05-19 11:21:44.865 o.a.s.h.c.s.AutoHDFS [ERROR] Could not populate HDFS credentials.
java.lang.RuntimeException: Failed to get delegation tokens.
at org.apache.storm.hdfs.common.security.AutoHDFS.getHadoopCredentials(AutoHDFS.java:242)
at org.apache.storm.hdfs.common.security.AutoHDFS.populateCredentials(AutoHDFS.java:76)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93)
at clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:28)
at org.apache.storm.daemon.nimbus$mk_reified_nimbus$reify__11226.submitTopologyWithOpts(nimbus.clj:1544)
at org.apache.storm.generated.Nimbus$Processor$submitTopologyWithOpts.getResult(Nimbus.java:2940)
at org.apache.storm.generated.Nimbus$Processor$submitTopologyWithOpts.getResult(Nimbus.java:2924)
at org.apache.storm.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.storm.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.storm.security.auth.SaslTransportPlugin$TUGIWrapProcessor.process(SaslTransportPlugin.java:138)
at org.apache.storm.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributedFileSystem not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2214)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2746)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2759)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2795)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2777)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:386)
at org.apache.storm.hdfs.common.security.AutoHDFS$1.run(AutoHDFS.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1704)
at org.apache.storm.hdfs.common.security.AutoHDFS.getHadoopCredentials(AutoHDFS.java:213)
... 17 more
Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributedFileSystem not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2120)
CAUSE: Missing Hadoop dependencies in pom.xml caused the exception: Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributedFileSystem not found
SOLUTION: To resolve this issue, add the following dependencies to pom.xml (a quick verification command follows the snippet): <dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-hdfs</artifactId>
<version>2.7.3.2.5.3.0-37</version>
<exclusions>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>2.7.3.2.5.3.0-37</version>
<exclusions>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
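To confirm the new dependencies resolve as expected, you can inspect the Maven dependency tree; a minimal sketch:
# Show where hadoop-hdfs enters the topology's dependency tree
mvn dependency:tree -Dincludes=org.apache.hadoop:hadoop-hdfs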
03-26-2017
09:18 PM
SYMPTOM: While integrating Storm 1.0.1 with Elasticsearch 5.0.0, the following error is observed: Exception in thread "main" java.lang.NoClassDefFoundError: org/elasticsearch/common/base/Preconditions
at org.apache.storm.elasticsearch.common.EsConfig.<init>(EsConfig.java:62)
at org.apache.storm.elasticsearch.common.EsConfig.<init>(EsConfig.java:49)
at com.mz.pipeline.StreamToES5_1.main(StreamToES5_1.java:72)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Caused by: java.lang.ClassNotFoundException: org.elasticsearch.common.base.Preconditions
Following were the maven dependencies: <dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-core</artifactId>
<version>1.0.2</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>log4j-over-slf4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch</artifactId>
<version>5.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-elasticsearch</artifactId>
<version>1.0.2</version>
</dependency>
ROOT CAUSE: The topology was compiled/packaged against the Apache artifacts instead of those from the Hortonworks repository.
RESOLUTION: Please add the Hortonworks repository to the pom.xml:
<repositories>
<repository>
<id>hortonworks</id>
<url>http://repo.hortonworks.com/content/groups/public/</url>
</repository>
</repositories>
Then change the storm artifact versions, which should be in this format: <apache_version>.<HDP-version>
For example, for 'storm-core' in HDP 2.5.0.0 the version would be 1.0.1.2.5.0.0-1245; similarly, for 'storm-elasticsearch' it would be 1.0.1.2.5.0.0-1245. You can find the version corresponding to your HDP release here: http://repo.hortonworks.com/content/groups/public/org/apache/storm/
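A minimal sketch of the adjusted storm-core dependency, using the HDP 2.5.0.0 version from the example above:
<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-core</artifactId>
  <!-- format: <apache_version>.<HDP-version>, resolved from the Hortonworks repository -->
  <version>1.0.1.2.5.0.0-1245</version>
</dependency>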
03-24-2017
06:13 PM
ROOT CAUSE: kafka.metrics.reporters in Advanced kafka-broker was pointing to the Ganglia metrics reporter:
kafka.metrics.reporters=kafka.ganglia.KafkaGangliaMetricsReporter
RESOLUTION: It should point to the Ambari metrics reporter instead:
kafka.metrics.reporters=org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter
You can modify the property via Ambari -> Kafka -> Configs -> Advanced kafka-broker -> kafka.metrics.reporters. Please save the changes and restart the required services (a command-line sketch follows).
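As an alternative to the UI, Ambari ships a configs.sh helper; a minimal sketch, assuming default admin credentials, an Ambari host named ambari-host, and a cluster named cluster1 (all placeholders; verify the script path on your ambari-server host):
# Update the kafka-broker config type from the command line
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
  set ambari-host cluster1 kafka-broker \
  kafka.metrics.reporters org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter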
03-24-2017
06:06 PM
1 Kudo
SYMPTOM: Storm nimbus fails to come up after Ambari was upgraded to version 2.4 while on HDP 2.4:
2016-11-22 13:29:43.066 [timer] b.s.d.nimbus [ERROR] Error when processing event
java.lang.NullPointerException
at clojure.lang.Numbers.ops(Numbers.java:961) ~[clojure-1.6.0.jar:?]
at clojure.lang.Numbers.isZero(Numbers.java:90) ~[clojure-1.6.0.jar:?]
at backtype.storm.util$partition_fixed.invoke(util.clj:892) ~[storm-core-0.10.0.2.4.0.0-169.jar:0.10.0.2.4.0.0-169]
at clojure.lang.AFn.applyToHelper(AFn.java:156) ~[clojure-1.6.0.jar:?]
at clojure.lang.AFn.applyTo(AFn.java:144) ~[clojure-1.6.0.jar:?]
at clojure.core$apply.invoke(core.clj:624) ~[clojure-1.6.0.jar:?]
at clojure.lang.AFn.applyToHelper(AFn.java:156) ~[clojure-1.6.0.jar:?]
at clojure.lang.RestFn.applyTo(RestFn.java:132) ~[clojure-1.6.0.jar:?]
at clojure.core$apply.invoke(core.clj:626) ~[clojure-1.6.0.jar:?]
at clojure.core$partial$fn__4228.doInvoke(core.clj:2468) ~[clojure-1.6.0.jar:?]
at clojure.lang.RestFn.invoke(RestFn.java:408) ~[clojure-1.6.0.jar:?]
at backtype.storm.util$map_val$iter__366__370$fn__371.invoke(util.clj:301) ~[storm-core-0.10.0.2.4.0.0-169.jar:0.10.0.2.4.0.0-169]
ROOT CAUSE: This is a known issue: https://hortonworks.jira.com/browse/BUG-66735
RESOLUTION: This has been fixed in HDP 2.5.3; please refer to the 'Upgrade' section here: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_release-notes/content/fixed_issues.html
WORKAROUND: This can be resolved by following the steps below:
1. Deactivate all running topologies.
2. Stop the Storm service.
3. Delete all Storm state under ZooKeeper: -> /usr/hdp/current/zookeeper-client/bin/zkCli.sh (in a secure environment, optionally specify -server zk.server:port) -> rmr /storm
4. Delete all state under the storm-local directory. Make sure to run this on all Storm hosts: rm -rf <value of storm.local.dir>
5. Start the Storm service.
03-24-2017
05:59 PM
1 Kudo
We can configure multiple listeners by giving a comma-separated list of URIs that Kafka will listen on. Please follow the steps below to implement this:
1. Add the listeners as a comma-separated value in Ambari -> Kafka -> Configs -> listeners, for example:
listeners=PLAINTEXT://myhost:6667,PLAINTEXTSASL://myhost:6668
2. Add an ACL for the 'Anonymous' user, because in PLAINTEXT connections the user's identity is set to Anonymous. For example:
$ bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=ambari-server:2181 --add --allow-principal User:Anonymous --producer --topic topic-oct
3. Run the producer with the security protocol set to PLAINTEXT for the PLAINTEXT listener, or to PLAINTEXTSASL for the SASL listener:
$ bin/kafka-console-producer.sh --broker-list ambari-server.support.com:6667 --topic topic-oct --security-protocol PLAINTEXT
$ bin/kafka-console-producer.sh --broker-list ambari-server.support.com:6668 --topic topic-oct --security-protocol PLAINTEXTSASL
Kindly replace the broker hostname:port, ZooKeeper hostname:port, and topic names according to the values configured in your cluster. Note: This is only supported in HDP 2.3.4+; it is not available in prior versions.
03-24-2017
05:51 PM
1 Kudo
PROBLEM: How to enable GC logging for ZooKeeper.
SOLUTION: When using the Ambari web UI:
1. Click on the ZooKeeper service.
2. Click on the Configs tab.
3. Navigate to 'Advanced zookeeper-env'.
4. Locate the 'zookeeper-env template' setting.
5. Append the following to 'export SERVER_JVMFLAGS=-Xmx1024m':
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$ZOO_LOG_DIR/zookeeper.gc.`date +'%Y%m%d%H%M'`
To be precise, it should look like:
export SERVER_JVMFLAGS="-Xmx1024m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$ZOO_LOG_DIR/zookeeper.gc.`date +'%Y%m%d%H%M'`"
6. Save the changes and restart the ZooKeeper service when prompted.
When the cluster is managed outside Ambari:
1. On each ZooKeeper node, open zookeeper-env.sh; you can find it at /etc/zookeeper/conf/.
2. Append the above-mentioned parameters to the SERVER_JVMFLAGS value (a quick verification sketch follows).
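After the restart, a timestamped GC log should appear in the ZooKeeper log directory; a minimal check, assuming the default HDP location /var/log/zookeeper (an assumption):
# The GC log should exist and grow as the JVM logs collections
ls -l /var/log/zookeeper/zookeeper.gc.*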
03-24-2017
05:41 PM
SYMPTOMS: When using the SelectHiveQL processor in NiFi to run Hive queries, it does not use the specified YARN queue ('nifi') when the queue is configured as follows in the HiveConnectionPool settings: jdbc:hive2://<server>:<port>;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;auth=kerberos;principal=hive/_HOST@DOMAIN.COM;tez.queue.name=nifi
Following is an example of queue setting in yarn capacity scheduler: yarn.scheduler.capacity.root.queues=default,nifi
yarn.scheduler.capacity.root.default.user-limit-factor=1
yarn.scheduler.capacity.root.default.state=RUNNING
yarn.scheduler.capacity.root.default.maximum-capacity=40
yarn.scheduler.capacity.root.default.capacity=40
yarn.scheduler.capacity.root.default.acl_submit_applications=yarn
yarn.scheduler.capacity.root.capacity=100
yarn.scheduler.capacity.root.acl_administer_queue=yarn
yarn.scheduler.capacity.root.accessible-node-labels=*
yarn.scheduler.capacity.node-locality-delay=40
yarn.scheduler.capacity.maximum-applications=10000
yarn.scheduler.capacity.maximum-am-resource-percent=0.2
yarn.scheduler.capacity.queue-mappings-override.enable=true
yarn.scheduler.capacity.root.default.acl_administer_queue=yarn
yarn.scheduler.capacity.root.nifi.acl_administer_queue=*
yarn.scheduler.capacity.root.nifi.acl_submit_applications=*
yarn.scheduler.capacity.root.nifi.capacity=60
yarn.scheduler.capacity.root.nifi.maximum-capacity=60
yarn.scheduler.capacity.root.nifi.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.nifi.ordering-policy=fifo
yarn.scheduler.capacity.root.nifi.state=RUNNING
yarn.scheduler.capacity.root.nifi.user-limit-factor=1
RESOLUTION: Add tez.queue.name prefixed with a question mark, like this: ?tez.queue.name=<queue_name>. For example: jdbc:hive2://<server>:<port>;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;auth=kerberos;principal=hive/_HOST@DOMAIN.COM;?tez.queue.name=nifi
01-23-2018
11:27 PM
Resolution didn't work. I tried:
export SPARK_HOME="/usr/hdp/current/spark2-client"
export SPARK_MAJOR_VERSION=2
kinit sa_seed_ld@IHGINT.GLOBAL -kt /etc/seed_ld.keytab
/usr/hdp/current/spark2-client/bin/spark-submit \
--verbose \
--master yarn \
--jars /usr/google/gcs/lib/gcs-connector-latest-hadoop2.jar \
--deploy-mode client \
--num-executors 10 \
--executor-memory 2G \
--executor-cores 2 \
--class com.abc.sample.DirectStreamConsumer \
--conf "spark.driver.allowMultipleContexts=true" \
--files "kafka_client_jaas.conf,/etc/seed_ld.keytab" \
--driver-java-options "-Djava.security.auth.login.config=kafka_client_jaas.conf" \
--conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=kafka_client_jaas.conf" \
--conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j-spark.properties" \
--conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=log4j-spark.properties" \
--driver-java-options "-Dsun.security.krb5.debug=true -Dsun.security.spnego.debug=true" \
sample-0.0.1-SNAPSHOT.jar
My JAAS file and the above command are in the same directory.
07-20-2016
12:21 AM
I believe this is the link: https://hortonworks.my.salesforce.com/kA2E0000000fxdx?srPos=0&srKp=ka2&lang=en_US