Member since: 08-08-2013
Posts: 339
Kudos Received: 132
Solutions: 27

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 14830 | 01-18-2018 08:38 AM
 | 1574 | 05-11-2017 06:50 PM
 | 9195 | 04-28-2017 11:00 AM
 | 3438 | 04-12-2017 01:36 AM
 | 2841 | 02-14-2017 05:11 AM
02-12-2014 06:29 AM
Nice, many thanks, Clint. Looking forward to your feedback. br, Gerd
02-12-2014 04:54 AM
Hi, I'm asking myself where to put security-related concerns/questions and would suggest creating a separate heading for that. Or is there such a section already that I just didn't find? I'm thinking of topics like "secure logging", "auditing", "traceability", etc. br, Gerd
02-04-2014 04:15 AM
1 Kudo
Solved! The repo URL was missing "/latest" at the end => http://archive.cloudera.com/spark/parcels/latest
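For anyone verifying a parcel repo URL before pointing CM at it, a quick sanity check one could run (a sketch; it assumes curl is available, and relies on the fact that parcel repositories publish a manifest.json index that CM reads):

```bash
# Sketch: a usable parcel repo serves manifest.json at its root. Without the
# /latest suffix there is no manifest at that URL, which is why CM listed no
# Spark parcel.
curl -s http://archive.cloudera.com/spark/parcels/latest/manifest.json | head
```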
02-04-2014 03:06 AM
Hi, I tried to install Spark via parcel as described here: http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Cloudera-Manager-Installation-Guide/cmig_spark_installation_standalone.html but after adding/saving the repo http://archive.cloudera.com/spark/parcels, Cloudera Manager doesn't show any Spark parcel to download/install. I'm using CM 4.8.1; any hints what I am missing? br...Gerd...
Labels:
- Apache Spark
01-27-2014 05:41 AM
FYI: after resolving some port conflicts, all problems are gone and Accumulo is up and running.
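For anyone hitting the same symptom, a quick way to spot such a port conflict (a sketch; the netstat flags assume Linux, and 50095 is the Accumulo monitor port mentioned in the post below):

```bash
# Sketch: check whether some other process already listens on the Accumulo
# monitor port (50095). A free port prints nothing; a conflicting process
# shows up together with its PID.
netstat -tlnp | grep 50095
```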
01-27-2014 02:33 AM
Hi Sean, many thanks for your reply. I tried adding the Accumulo 1.4.3 parcel repo to CM, but after adding the remote parcel URL (http://archive.cloudera.com/accumulo/parcels/1.4.3/) and clicking on the parcel icon I just get: ACCUMULO 1.4.3-cdh4.3.0 Unavailable. This is most probably due to the CDH version; I'm running CDH 4.5.0. Is there a parcel repo available that works with CDH 4.5.0 (CM version 4.8)?

Nevertheless, after adding $HADOOP_PREFIX/lib/hadoop/lib/.*.jar and $HADOOP_PREFIX/lib/hadoop/client/.*.jar to accumulo-site.xml, I was able to proceed with executing "su hdfs ./bin/accumulo init". After starting Accumulo (./bin/start-all.sh) as root it is even possible to log in to the Accumulo shell via "./bin/accumulo shell -u root".

Regrettably the web interface isn't working: http://accumulomaster:50095/status gives me "webpage not available", and "telnet accumulomaster 50095" produces "telnet: Unable to connect to remote host: Connection refused". Any hints where I can further check what happened to the web UI? The output of the "accumulo classpath" command contains many lines including "/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/lib/" on level 3, so I assume the config should be fine... best regards..: Gerd :..
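For readers reproducing this: the accumulo-site.xml change described above would land in the general.classpaths property, assuming the Accumulo 1.5 default classloader configuration. A sketch, with the stock entries abbreviated and the two added CDH-parcel globs at the end of the value:

```xml
<!-- Sketch of the classpath change described above (Accumulo 1.5's
     general.classpaths property; stock entries abbreviated). The last two
     globs point at the CDH parcel's Hadoop jars via $HADOOP_PREFIX. -->
<property>
  <name>general.classpaths</name>
  <value>
    $ACCUMULO_HOME/lib/[^.].*.jar,
    $HADOOP_CONF_DIR,
    $HADOOP_PREFIX/lib/hadoop/.*.jar,
    $HADOOP_PREFIX/lib/hadoop/lib/.*.jar,
    $HADOOP_PREFIX/lib/hadoop/client/.*.jar,
  </value>
</property>
```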
01-23-2014 12:50 PM
Hi,
I just wanted to install Accumulo (1.5.0 from tar.gz) on a Hadoop cluster (latest CDH4.5 parcel) managed by CM (4.8).
How do I have to set the HADOOP_HOME, ZOOKEEPER_HOME and HADOOP_CONF_DIR properties to be able to start Accumulo in this CM-managed environment?
My current config looks like:
```
HADOOP_PREFIX=/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30
HADOOP_CONF_DIR="/etc/hadoop/conf"
ZOOKEEPER_HOME=/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/lib/zookeeper
```
...but after executing "./bin/accumulo init" I get the following error:

```
root@hadoop-pg-2:~/accumulo/accumulo-1.5.0# bin/accumulo init
Uncaught exception: null
java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.accumulo.start.Main.main(Main.java:41)
Caused by: java.lang.NoClassDefFoundError: org/apache/commons/logging/LogFactory
        at org.apache.commons.vfs2.impl.DefaultFileSystemManager.<init>(DefaultFileSystemManager.java:120)
        at org.apache.accumulo.start.classloader.vfs.FinalCloseDefaultFileSystemManager.<init>(FinalCloseDefaultFileSystemManager.java:21)
        at org.apache.accumulo.start.classloader.vfs.AccumuloVFSClassLoader.generateVfs(AccumuloVFSClassLoader.java:227)
        at org.apache.accumulo.start.classloader.vfs.AccumuloVFSClassLoader.getClassLoader(AccumuloVFSClassLoader.java:201)
        ... 5 more
Caused by: java.lang.ClassNotFoundException: org.apache.commons.logging.LogFactory
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at org.apache.accumulo.start.classloader.AccumuloClassLoader$2.loadClass(AccumuloClassLoader.java:241)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
```
What's going wrong here?
I'm following the install instructions from http://accumulo.apache.org/1.5/accumulo_user_manual.html#_installation
thanks...Gerd...
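A possible first check for the error above, sketched under the assumption of the tarball layout from the post: the root cause in the trace is commons-logging missing from Accumulo's VFS classloader, so listing what the classloader actually sees narrows it down.

```bash
# Sketch: print Accumulo's effective classpath and look for commons-logging,
# which the CDH parcel ships under its hadoop/lib directory. An empty result
# (or the same NoClassDefFoundError from this command) points at the
# general.classpaths entries in accumulo-site.xml; see the 01-27-2014 reply
# above for the jar globs that resolved it.
cd ~/accumulo/accumulo-1.5.0
./bin/accumulo classpath | grep -i commons-logging
```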
Labels:
- Apache Accumulo
01-23-2014 12:41 PM
Clint, thank you very much.
01-21-2014 12:00 AM
Hi Clint, many thanks for your very helpful answer and the brilliant blog post about HBase replication. Just one more question: if Cloudera Enterprise is no option ($$$) and the synchronisation needs to be done on the storage layer, is repeatedly calling distcp an appropriate low-cost solution, or how would you tackle this problem? br...: Gerd :....
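A minimal sketch of the "repeated distcp" idea under discussion; the NameNode hostnames nn-a and nn-b and the /data path are hypothetical placeholders:

```bash
# Hypothetical low-cost sync between two clusters: copy files from cluster A
# to cluster B. -update skips files that already match on the target, so
# repeated runs only transfer new or changed data.
hadoop distcp -update hdfs://nn-a:8020/data hdfs://nn-b:8020/data
```

A crontab entry such as `0 * * * * hadoop distcp -update hdfs://nn-a:8020/data hdfs://nn-b:8020/data` (illustrative) would repeat the copy hourly; note that distcp is a batch MapReduce job, so this gives eventual rather than continuous replication.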
01-20-2014 04:00 AM
Hi, thinking of two datacenters and the requirement that a cluster survive the failure of a whole datacenter, what would be the preferred setup?

a) ONE Hadoop cluster spanned over both datacenters, or
b) TWO independent Hadoop clusters with (somehow) synced data

Questions:
- For option a) it seems obvious that the interconnection between the datacenters needs to be very good, at least 1 GBit?
- Is it possible to configure Hadoop to replicate blocks to different datacenters, in precedence of replicating to different racks, via the rack topology script? (See the sketch after this post.)
- If option b) is chosen, how can an automatic, continuous data replication between the two clusters be established (are there tools for this)?
- What are the main considerations and recommendations for the initially mentioned requirement?

Many thanks in advance...Gerd...
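On the rack-topology question above, a minimal sketch of a topology script that encodes the datacenter in the rack path; the IP prefixes are hypothetical. One caveat: the stock block placement policy only distinguishes racks, so treating /dc1 and /dc2 as racks makes cross-datacenter placement likely, not guaranteed.

```bash
#!/bin/bash
# Sketch: Hadoop invokes the topology script with one or more IPs/hostnames
# and expects one rack path per argument. Encoding the datacenter as the top
# level (/dcX/rackY) lets rack awareness spread replicas across datacenters.
for host in "$@"; do
  case "$host" in
    10.1.*) echo -n "/dc1/rack1 " ;;   # hypothetical dc1 subnet
    10.2.*) echo -n "/dc2/rack1 " ;;   # hypothetical dc2 subnet
    *)      echo -n "/default/rack0 " ;;
  esac
done
echo
```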
Labels:
- Apache Hadoop