Member since: 04-10-2014
Posts: 18
Kudos Received: 2
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 5227 | 05-14-2014 01:04 AM |
11-21-2016 02:34 AM
Hello! I'm also new to Cloudera. I want to try Banana, so I put a WAR file in /opt/cloudera/parcels/CDH/lib/solr/webapp, but nothing happened when I navigated to it with my web browser. I think I'm missing something. I created the WAR file with the command line `jar -cvf ../banana.war *`. Has anyone succeeded in installing Banana on a Cloudera distribution? Many thanks
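For what it's worth, a freshly copied WAR is usually only picked up after the servlet container restarts, so restarting Solr after copying the file is worth trying. A minimal sketch, assuming a package-based CDH install; the paths and the service name are assumptions, and on a parcel install managed by Cloudera Manager you would restart the Solr service from CM instead:

```
# Copy the freshly built WAR into Solr's webapp directory, then restart Solr
# so the container redeploys it (paths/service name assume a CDH install).
cp banana.war /opt/cloudera/parcels/CDH/lib/solr/webapp/
sudo service solr-server restart
```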
06-28-2016 08:12 AM
1 Kudo
I am also having the same problem after I created the directory and moved the jar to the plugin folder. Here are the details. The .jar file is in the right lib folder, and my environment variables are:

export FLUME_HOME=/opt/flume
export FLUME_CONF_DIR=$FLUME_HOME/conf
export FLUME_CLASSPATH=$FLUME_HOME_DIR
export PATH=$FLUME_HOME/bin:$PATH

I am doing this as the root user. Below is the error message:

16/06/28 10:48:21 ERROR node.PollingPropertiesFileConfigurationProvider: Failed to load configuration data. Exception follows.
org.apache.flume.FlumeException: Unable to load source type: com.cloudera.flume.source.TwitterSource, class: com.cloudera.flume.source.TwitterSource
at org.apache.flume.source.DefaultSourceFactory.getClass(DefaultSourceFactory.java:67)
at org.apache.flume.source.DefaultSourceFactory.create(DefaultSourceFactory.java:40)
at org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:327)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:102)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: com.cloudera.flume.source.TwitterSource
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.flume.source.DefaultSourceFactory.getClass(DefaultSourceFactory.java:65)

Can anyone shed some light on this for me? Thank you very much.
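For anyone comparing notes: the ClassNotFoundException means the jar containing com.cloudera.flume.source.TwitterSource is not actually on Flume's classpath. A minimal sketch of two common ways to expose it, assuming the flume-sources jar built from Cloudera's cdh-twitter-example (the jar name and the plugin directory name are assumptions):

```
# Option 1: use Flume's plugins.d layout, which Flume scans at startup.
mkdir -p $FLUME_HOME/plugins.d/twitter-streaming/lib
cp flume-sources-1.0-SNAPSHOT.jar $FLUME_HOME/plugins.d/twitter-streaming/lib/

# Option 2: point FLUME_CLASSPATH at the jar itself (e.g. in conf/flume-env.sh).
# Note the post above sets FLUME_CLASSPATH=$FLUME_HOME_DIR, an undefined
# variable; the value should reference the jar (or a directory containing it).
export FLUME_CLASSPATH=$FLUME_HOME/lib/flume-sources-1.0-SNAPSHOT.jar
```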
03-07-2016 10:58 AM
1 Kudo
I ran into this error, and it was caused by running out of heap for the NodeManager. I increased the heap, and YARN came up without errors.
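In case it helps: outside of Cloudera Manager, the NodeManager heap can be raised in yarn-env.sh. A minimal sketch; the 2048 MB value is only an example, and in CM the equivalent is the NodeManager's Java heap setting:

```
# In $HADOOP_CONF_DIR/yarn-env.sh: NodeManager JVM heap size, in MB.
export YARN_NODEMANAGER_HEAPSIZE=2048
```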
07-03-2014 12:16 AM
1 Kudo
Hi, to read data in Avro format from Hive you have to use an Avro SerDe. A good starting point may be http://www.michael-noll.com/blog/2013/07/04/using-avro-in-mapreduce-jobs-with-hadoop-pig-hive/ But this is not related to this topic, since the Solr sink puts data into Solr. I'd suggest using just an HDFS sink to put your data on HDFS and creating an (external or not) Hive table afterwards, as sketched below. You do not need Solr and/or Morphlines for this. Best, Gerd
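As a rough illustration of the HDFS-sink approach, here is a minimal sketch of an external Hive table over the Avro files, using the Avro SerDe; the table name, HDFS paths, and schema location are all hypothetical:

```
# Create an external Hive table over Avro files written by a Flume HDFS sink.
# Columns are derived from the Avro schema referenced in TBLPROPERTIES.
hive -e "
CREATE EXTERNAL TABLE flume_events
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/user/flume/events'
TBLPROPERTIES ('avro.schema.url'='hdfs:///user/flume/schemas/event.avsc');
"
```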
07-01-2014 02:18 AM
We just need to use Hive to create the Impala table...
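One detail worth adding: after creating a table through Hive, Impala only sees it once its metadata cache has been refreshed. A minimal sketch:

```
# Make a table created via Hive visible to Impala.
impala-shell -q "INVALIDATE METADATA"
```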
06-27-2014 12:32 AM
Yep, the keytab that was created did not have the correct permissions; I had forgotten that!
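For reference, the usual fix is to make the keytab owned and readable by the service user alone; a sketch with a hypothetical user and path:

```
# Hypothetical user and path: adjust to the service that reads the keytab.
chown hdfs:hdfs /etc/hadoop/conf/hdfs.keytab
chmod 400 /etc/hadoop/conf/hdfs.keytab
```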
05-14-2014 01:04 AM
It seems that the base pattern is mandatory! Even though it is not specified in the documentation 🙂 So I added the base pattern "dc=example,dc=com" and it worked.
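A quick way to sanity-check a base DN before putting it into the configuration is an anonymous ldapsearch; the server URL below is a placeholder:

```
# Verify that the base DN resolves on the LDAP server.
ldapsearch -x -H ldap://ldap.example.com -b "dc=example,dc=com" -s base "(objectClass=*)"
```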
04-29-2014 08:28 AM
6 Kudos
Hello, you can access ZooKeeper via several methods, but the easiest is to use the 'hbase zkcli' command from one of the servers running an HBase service. Once in the ZooKeeper CLI, you can run 'rmr /hbase' to remove the znode, as shown below. As you have found, this is a necessary step if you are removing and reinstalling HBase while keeping the same ZooKeeper.
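Put together, the whole session looks like this (run on a host with an HBase role):

```
# Open the ZooKeeper CLI bundled with HBase...
hbase zkcli
# ...then, at the ZooKeeper CLI prompt, delete HBase's znode tree:
rmr /hbase
```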