SLF4J multiple bindings warning when logging in to the accumulo shell on the Cloudera QuickStart VM

Explorer

I have just successfully installed accumulo-1.4.3 on the Cloudera CDH4 QuickStart VM.

Each time I log in to the accumulo shell I get the following warning:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/client-0.20/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/client-0.20/slf4j-log4j12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

 

The website states:

SLF4J warning or error messages and their meanings

The method o.a.commons.logging.impl.SLF4JLogFactory#release was invoked.

Given the structure of the commons-logging API, in particular as implemented by SLF4J, the o.a.commons.logging.impl.SLF4JLogFactory#release() method should never be called. However, depending on the deployment of commons-logging.jar files in your servlet container, the release() method may be unexpectedly invoked by a copy of the org.apache.commons.logging.LogFactory class shipping with commons-logging.jar.

This is a relatively common occurrence with recent versions of Tomcat, especially if you place jcl-over-slf4j.jar in the WEB-INF/lib directory of your web application instead of $TOMCAT_HOME/common/lib, where $TOMCAT_HOME stands for the directory where Tomcat is installed. In order to fully benefit from the stability offered by jcl-over-slf4j.jar, we recommend that you place jcl-over-slf4j.jar in $TOMCAT_HOME/common/lib without placing a copy in your web applications.

Please also see bug #22.

 

I don't want to break anything; how do I fix this?

Thanks,

Chris

8 REPLIES

Expert Contributor

In this case, the warning is benign. The CDH4 hadoop client directory contains two symlinks, one with a version number in the name and one without, that both point to the same slf4j-log4j12 jar, so SLF4J reports two bindings even though only one jar is involved. You can safely ignore the warning.

You can confirm this by doing a long listing on the directory mentioned in the warning:

 

[accumulo@localhost accumulo]$ ls -lah /usr/lib/hadoop/client-0.20/slf4j-log4j*
lrwxrwxrwx. 1 root root 43 Oct 7 08:33 /usr/lib/hadoop/client-0.20/slf4j-log4j12-1.6.1.jar -> /usr/lib/hadoop/lib/slf4j-log4j12-1.6.1.jar
lrwxrwxrwx. 1 root root 43 Oct 7 08:33 /usr/lib/hadoop/client-0.20/slf4j-log4j12.jar -> /usr/lib/hadoop/lib/slf4j-log4j12-1.6.1.jar
[accumulo@localhost accumulo]$
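If you want to double-check that both names really resolve to a single file, readlink will follow the links (readlink ships with coreutils, so it should be present on the VM):

[accumulo@localhost accumulo]$ readlink -f /usr/lib/hadoop/client-0.20/slf4j-log4j12.jar
/usr/lib/hadoop/lib/slf4j-log4j12-1.6.1.jar
[accumulo@localhost accumulo]$ readlink -f /usr/lib/hadoop/client-0.20/slf4j-log4j12-1.6.1.jar
/usr/lib/hadoop/lib/slf4j-log4j12-1.6.1.jar

Both names resolve to the same target, so there is only one binding jar on disk.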

 

You can clear up the warning by modifying your accumulo-site.xml so that the regexes in general.classpaths only match files with version numbers in their names:

    <property>
      <name>general.classpaths</name>
      <value>
        $ACCUMULO_HOME/lib/[^.].$ACCUMULO_VERSION.jar,
        $ACCUMULO_HOME/lib/[^.].*.jar,
        $ZOOKEEPER_HOME/zookeeper[^.].*-[0-9].*.jar,
        $HADOOP_CONF_DIR,
        $HADOOP_CLIENT_HOME/[^.].*-[0-9].*.jar,
        $HADOOP_MAPRED_HOME/[^.].*-[0-9].*.jar,
        $HADOOP_MAPRED_HOME/lib/[^.].*.jar,
      </value>
    </property>

 

Note in particular the addition of "-[0-9].*" to the regex for matching jars in the hadoop client directory. It may help to view this change in the larger context of configuration files for working with the QuickStart VM.
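Once the file is edited, you can check which jars Accumulo will actually load. Assuming your 1.4.3 build includes the classpath subcommand (check bin/accumulo if unsure), something like the following should now show only the versioned jar:

[accumulo@localhost accumulo]$ $ACCUMULO_HOME/bin/accumulo classpath | grep slf4j-log4j12
file:/usr/lib/hadoop/client-0.20/slf4j-log4j12-1.6.1.jar

The exact output format varies by version, but the versionless symlink should no longer appear, and the next shell login should start without the multiple-bindings warning.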


Expert Contributor

Yes. Those entries exist so that you can test changes to the Accumulo code by recompiling and restarting Accumulo (i.e., skipping the step of building jars and deploying them).

 

I generally recommend never including those paths in Accumulo's classpath, because:

  1. They should never be present in a production deployment, and it's a bad idea to intentionally introduce differences between dev and production (in this case, whether the code runs from jars).
  2. The main code base doesn't build unless you use the package goal anyway, so you'll already have jars ready to go.
  3. If you are building directly within your Accumulo installation (which you should not do), the default package goal loads your changes on restart just as easily, because it puts the repackaged Accumulo jars into the lib directory (a sketch of this follows the list).
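For point 3, a minimal sketch of that workflow, assuming a stock Maven setup (the flags are the standard Maven lifecycle, nothing Accumulo-specific):

$ cd $ACCUMULO_HOME                     # only if you really are building inside the installation
$ mvn -DskipTests package               # standard package goal; per point 3, the rebuilt jars end up in lib/
$ bin/stop-all.sh && bin/start-all.sh   # restart so the new jars in lib/ are picked up

If you build from a separate source checkout instead (the recommended approach), copy the jars produced by the package goal into $ACCUMULO_HOME/lib yourself before restarting.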


New Contributor

I am getting this error while running a MapReduce program on the Cloudera VM:

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".

Defaulting to no-operation (NOP) logger implementation.

Unable to load native-hadoop library for your platform.

 

Can someone help me fix this?

Expert Contributor

Please open a new discussion thread for your issue; older, already-solved threads are unlikely to get much attention.

I'd recommend you post your MapReduce issue over in the batch processing forum. Be sure to include your version of CDH, a complete stack trace, and the command you used to launch the job.