Member since
12-14-2015
17
Posts
3
Kudos Received
2
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1959 | 09-16-2016 07:57 AM
 | 1950 | 05-03-2016 10:42 AM
09-16-2016
07:57 AM
Hi All, I have solved the issue by excluding the "slf4j-log4j12" library from the package.
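For reference, with the maven-shade-plugin already used in the topology POM below, one way to keep that binding out of the shaded jar is an artifactSet exclude (a sketch of the plugin configuration, not necessarily the exact fix applied):

```xml
<configuration>
  <artifactSet>
    <excludes>
      <!-- keep the slf4j->log4j binding out of the uber-jar;
           Storm's worker classpath provides its own logging backend -->
      <exclude>org.slf4j:slf4j-log4j12</exclude>
    </excludes>
  </artifactSet>
</configuration>
```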
09-13-2016
10:56 AM
1 Kudo
Hello, I'm upgrading a topology from Storm version 0.10.0 to 1.0.1 to deploy it on the new HDP 2.5.
The topology has a Kafka spout and several bolts (HDFS, Hive, HBase, socket...).
Some typical operations, such as the Kafka spout, are packaged in a separate library (called "Common"), so this library has also been updated to the latest versions of the Hadoop components.
When I deploy the topology, this error appears (in the Storm UI) on the spout:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.log4j.Log4jLoggerFactory
    at org.apache.log4j.Logger.getLogger(Logger.java:39)
    at kafka.utils.Logging$class.logger(Logging.scala:24)
    at kafka.consumer.SimpleConsumer.logger$lzycompute(SimpleConsumer.scala:35)
    at kafka.consumer.SimpleConsumer.logger(SimpleConsumer.scala:35)
    at kafka.utils.Logging$class.info(Logging.scala:75)
    at kafka.consumer.SimpleConsumer.info(SimpleConsumer.scala:35)
    at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:94)
    at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83)
    at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149)
    at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79)
    at org.apache.storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:75)
    at org.apache.storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:65)
    at org.apache.storm.kafka.PartitionManager.<init>(PartitionManager.java:94)
    at org.apache.storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98)
    at org.apache.storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69)
    at org.apache.storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:129)
    at org.apache.storm.daemon.executor$fn__6503$fn__6518$fn__6549.invoke(executor.clj:651)
    at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484)
    at clojure.lang.AFn.run(AFn.java:22)
    at java.lang.Thread.run(Thread.java:745)
Searching for the error, I found that it is necessary to exclude log4j in the pom.xml, but the result is the same. I have also tried using the old Kafka library, but nothing changed.
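One way to see which dependency is still pulling the binding into the shaded jar (assuming Maven 3 with the standard dependency plugin) is to filter the dependency tree:

```shell
# list every path through which an slf4j->log4j binding or log4j itself arrives
mvn dependency:tree -Dincludes=org.slf4j:slf4j-log4j12,log4j:log4j
```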
Below you can find the pom.xml of the Common library (where the spout is embedded) and of the topology. COMMON pom.xml: <?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.ecube.swarco</groupId>
<artifactId>Common</artifactId>
<version>1.0</version>
<packaging>jar</packaging>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.compiler.source>1.7</maven.compiler.source>
<maven.compiler.target>1.7</maven.compiler.target>
<storm.core.version>1.0.1</storm.core.version>
<storm.kafka.version>1.0.1</storm.kafka.version>
<storm.hdfs.version>1.0.1</storm.hdfs.version>
<storm.hive.version>1.0.1</storm.hive.version>
</properties>
<dependencies>
<!-- Storm core Dependencies -->
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-core</artifactId>
<version>1.0.1</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>log4j-over-slf4j</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
<type>jar</type>
</dependency>
<!-- Storm Kafka Dependencies -->
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-kafka</artifactId>
<version>${storm.kafka.version}</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<!-- Storm HDFS Dependencies -->
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-hdfs</artifactId>
<version>${storm.hdfs.version}</version>
<type>jar</type>
</dependency>
<!-- Storm Hive Dependencies -->
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-hive</artifactId>
<version>${storm.hive.version}</version>
<exclusions>
<exclusion><!-- possible Scala conflict -->
<groupId>jline</groupId>
<artifactId>jline</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
</project> TOPOLOGY pom.xml: <?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.ecube.swarco</groupId>
<artifactId>SignalGroup</artifactId>
<version>1.0</version>
<packaging>jar</packaging>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.compiler.source>1.7</maven.compiler.source>
<maven.compiler.target>1.7</maven.compiler.target>
<hadoop.version>2.7.3</hadoop.version>
<zookeeper.version>3.4.6</zookeeper.version>
<kafka.version>0.10.0.1</kafka.version>
<hbase.version>1.1.2</hbase.version>
<storm.core.version>1.0.1</storm.core.version>
<storm.kafka.version>1.0.1</storm.kafka.version>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>1.4</version>
<configuration>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
<archive>
<manifest>
<mainClass>com.ecube.swarco.signalgroup.StormTopology</mainClass>
</manifest>
</archive>
<createDependencyReducedPom>true</createDependencyReducedPom>
<filters>
<filter>
<artifact>*:*</artifact>
<excludes>
<exclude>META-INF/*.SF</exclude>
<exclude>META-INF/*.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>
</excludes>
</filter>
</filters>
</configuration>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<transformers>
<transformer
implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
<transformer
implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
<mainClass></mainClass>
</transformer>
</transformers>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
<dependencies>
<!-- Common Dependencies -->
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>Common</artifactId>
<version>${project.version}</version>
</dependency>
<!-- Utility Dependencies -->
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>Utility</artifactId>
<version>${project.version}</version>
</dependency>
<!-- Hadoop Dependencies -->
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>${hadoop.version}</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
<!-- Zookeeper Dependencies -->
<dependency>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
<version>${zookeeper.version}</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<!-- Kafka Dependencies -->
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>${kafka.version}</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<!-- Storm core Dependencies -->
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-core</artifactId>
<version>${storm.core.version}</version>
<!-- Only for distributed mode -->
<scope>provided</scope>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
<exclusion>
<groupId>${project.groupId}</groupId>
<artifactId>Common</artifactId>
</exclusion>
</exclusions>
</dependency>
<!-- Storm Kafka Dependencies -->
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-kafka</artifactId>
<version>${storm.kafka.version}</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
<exclusion>
<groupId>${project.groupId}</groupId>
<artifactId>Common</artifactId>
</exclusion>
</exclusions>
</dependency>
<!-- Hbase Dependencies -->
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-client</artifactId>
<version>${hbase.version}</version>
</dependency>
<!-- Other Dependency -->
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
<version>3.1</version>
<type>jar</type>
</dependency>
</dependencies>
</project>
I should add that it works when run as a local cluster. Do you have any suggestions to solve it? Thanks in advance, Giuseppe
09-12-2016
07:44 AM
Perfect, I will try it. Thank you again
09-09-2016
03:57 PM
Yes, it works; we can close the thread. Thanks again.
09-09-2016
03:31 PM
Thank you, Artem, for the clarification. I will proceed using the new approach.
09-09-2016
08:13 AM
Thank you Artem! Yes, I'm working with 2.5; I was wrong to write otherwise above. The dependencies are OK, but KafkaConfig doesn't have the "forceFromStart" property. In fact, I've checked the KafkaConfig sources and it's missing; please see https://github.com/apache/storm/blob/master/external/storm-kafka/src/jvm/org/apache/storm/kafka/KafkaConfig.java So it could be a mistake in the documentation. In this case, how can I use this property if it is not present in KafkaConfig?
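For what it's worth, the KafkaConfig source linked above exposes the fields ignoreZkOffsets and startOffsetTime, which appear to replace the old flag; a sketch of how they might be combined (the exact semantics should be verified against the storm-kafka 1.x docs):

```java
SpoutConfig spoutConfig = new SpoutConfig(hosts, topic, zkRoot, clientId);
// replaces forceFromStart: ignore any offsets already committed to ZooKeeper...
spoutConfig.ignoreZkOffsets = true;
// ...and start from the latest offset, i.e. read only newly arriving messages
spoutConfig.startOffsetTime = kafka.api.OffsetRequest.LatestTime();
```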
09-08-2016
03:38 PM
1 Kudo
Hi All, We have upgraded our environment from HDP 2.3.4 to HDP 2.4 and are therefore reviewing our Storm topologies in order to use version 1.0.1 and its new features. In the old version, the storm-kafka library (0.9.2) included the option "forceFromStart" in SpoutConfig, which reads data from the beginning (if set to true) or gets only the current value (if set to false) from Kafka. In the latest version the option has been removed, and I see that the spout always reads data from the beginning. How can we replace this functionality in the new version? I mean, I want to get only the current value from the spout; how can I do that? Thanks in advance,
Giuseppe
Labels:
- Apache Kafka
- Apache Storm
05-06-2016
03:55 PM
Great article! A question: do you know in which HDP release Storm 1.0 will be shipped? Thanks,
Giuseppe
05-03-2016
10:42 AM
Hi All, I've solved the issue: it was due to the TableName variable being declared as static in the bolt and initialized in the constructor. In the bolt's prepare() method it was null in DRPC mode because, I suppose, Storm serializes the bolt to ship it to workers, so static state set in the submitting JVM is not carried over. I changed the field from static to a private instance field and now it works fine. Thanks anyway.
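The mechanism can be illustrated without Storm: Java serialization (which Storm uses to ship bolts to worker JVMs) copies instance fields but never static ones, so a static set in the submitting JVM is simply absent in a fresh worker JVM. A minimal sketch with a hypothetical Bolt class, not the real topology code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class StaticVsInstance {
    static class Bolt implements Serializable {
        static String staticTable;      // lives in the class, never serialized
        private String instanceTable;   // serialized along with the object
        Bolt(String t) { staticTable = t; instanceTable = t; }
    }

    public static void main(String[] args) throws Exception {
        Bolt b = new Bolt("mytable");
        // serialize the bolt, as Storm does when submitting a topology
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new ObjectOutputStream(bos).writeObject(b);
        // simulate a fresh worker JVM where the static was never initialized
        Bolt.staticTable = null;
        Bolt copy = (Bolt) new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray())).readObject();
        System.out.println(copy.instanceTable); // survives the round trip
        System.out.println(Bolt.staticTable);   // gone: null
    }
}
```

The instance field survives deserialization; the static one does not, which matches the null TableName seen in prepare().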
05-02-2016
08:49 AM
No Emil, I don't set hbase.rootdir and zookeeper.znode.parent in the Storm config because, I think, these are read from the XML. I will try setting them. I created a custom HBase bolt because I need to customize and apply some operations before storing in HBase, and this cannot be done using the SimpleHbaseMapper and HBaseBolt integrated in storm-hbase.
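If the values are not picked up from an hbase-site.xml on the worker classpath, they can also be passed explicitly through the topology configuration; a sketch where the config-key name "hbase.conf" and both addresses are illustrative assumptions:

```java
import java.util.HashMap;
import java.util.Map;

public class HBaseConfDemo {
    // Builds the map that would go into the topology Config, e.g.
    // conf.put("hbase.conf", hbaseConf()) paired with a bolt configured
    // to read that key (storm-hbase's HBaseBolt has withConfigKey(...)).
    static Map<String, Object> hbaseConf() {
        Map<String, Object> m = new HashMap<>();
        m.put("hbase.rootdir", "hdfs://namenode:8020/apps/hbase/data"); // hypothetical NameNode
        m.put("zookeeper.znode.parent", "/hbase-unsecure");             // common HDP default
        return m;
    }

    public static void main(String[] args) {
        System.out.println(hbaseConf().get("zookeeper.znode.parent"));
    }
}
```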