Member since: 02-19-2016
Posts: 158
Kudos Received: 69
Solutions: 24
My Accepted Solutions
Title | Views | Posted
---|---|---
| 1281 | 11-29-2017 08:47 PM
| 1691 | 10-24-2017 06:37 PM
| 17157 | 08-04-2017 06:58 PM
| 1834 | 05-15-2017 06:42 PM
| 2236 | 03-27-2017 06:36 PM
01-27-2017 08:07 PM
As Josh already mentioned, the best way to deploy HDP clusters is Ambari. Bigtop is a packaging project that has some ability to deploy unmanaged clusters for testing purposes; Ambari, on the other hand, is cluster management software. One day we may see Ambari as one of the packages in Bigtop, but even in that case cluster deployment would still be done via Ambari blueprints.
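For reference, blueprint-driven deployment comes down to two REST calls against the Ambari server (a minimal sketch; the host, credentials, and file names are placeholders):

```
# Register the blueprint, then instantiate a cluster from it
curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
  -d @blueprint.json http://ambari-host:8080/api/v1/blueprints/my-blueprint
curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
  -d @cluster-template.json http://ambari-host:8080/api/v1/clusters/my-cluster
```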
01-23-2017 09:25 PM
If I understand correctly, you need to change :part_key to :key.
01-22-2017 07:24 PM
2 Kudos
Try turning on Java Kerberos debug logging by adding -Dsun.security.krb5.debug=true to HADOOP_OPTS. It usually helps to understand what exactly fails during GSS initialization.
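For example (a minimal sketch; set it in hadoop-env.sh or in the shell before re-running the failing command):

```
# Enables verbose Kerberos tracing from the JVM's built-in krb5 implementation
export HADOOP_OPTS="$HADOOP_OPTS -Dsun.security.krb5.debug=true"
```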
01-19-2017 05:20 AM
1 Kudo
I checked the script more carefully. It looks like adding export CLASSPATH=$CLASSPATH:/etc/hbase/conf to zeppelin-env.sh via the Zeppelin configuration in Ambari would be enough.
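That is, the line appended to zeppelin-env.sh would be:

```
# Lets the interpreter pick up hbase-site.xml from the HBase config directory
export CLASSPATH=$CLASSPATH:/etc/hbase/conf
```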
01-18-2017 11:22 PM
4 Kudos
@Qi Wang Well, to fix it you will need to go through a few steps:
1. In the jdbc interpreter's configuration, remove the default artifacts for phoenix and hbase (phoenix-4.7... and hbase-1.1.3...).
2. Add a new artifact for phoenix-client.jar (just provide the path /usr/hdp/current/phoenix-client/phoenix-client.jar).
3. Set up the interpreter to find the hbase configuration dir. This can be done by adding export CLASSPATH=$CLASSPATH:/etc/hbase/conf to the end of zeppelin-env.sh using Ambari.
After a restart of the Zeppelin service everything should work; see the sanity check below.
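As a sanity check after the restart, it is worth confirming that Phoenix itself works outside Zeppelin (a sketch; the zookeeper host is a placeholder, and /hbase-unsecure is the HDP default znode for non-kerberized clusters):

```
# If sqlline connects fine, any remaining failure is on the Zeppelin side
/usr/hdp/current/phoenix-client/bin/sqlline.py zk-host:2181:/hbase-unsecure
```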
01-18-2017 09:05 PM
I bet that HBase is healthy. I was able to reproduce it locally: the same exception when running the test from Zeppelin, while sqlline (as well as hbase shell) works perfectly.
01-04-2017 06:33 PM
@Bala Vignesh N V What do you mean by 'fixed width file'? Can you give a more detailed example?
12-23-2016 01:12 AM
Could you please explain in more detail why distcp hftp -> hdfs didn't work? This is the recommended way to copy between different versions of Hadoop, and it should be run on the destination cluster.
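For reference, the copy would look something like this when run on the destination cluster (a sketch; hostnames and paths are placeholders, and 50070 is the default NameNode HTTP port that hftp uses):

```
# hftp is read-only, so it serves as the source; the newer cluster writes via hdfs://
hadoop distcp hftp://source-nn:50070/src/path hdfs://dest-nn:8020/dst/path
```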
12-23-2016 12:53 AM
2 Kudos
@Karan Alang Here is what the javac documentation says about the classpath: "Classpath entries that are neither directories nor archives (.zip or .jar files) nor * are ignored." So you should use just a simple '*' instead of '*.jar'. I believe hbase-client/lib/* would be enough. You may also use -verbose to track what javac is doing.
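For example (a sketch; the lib path and source file name are placeholders):

```
# Quote the wildcard so the shell doesn't expand it before javac sees it;
# -verbose shows which classes javac actually loads
javac -verbose -cp "/usr/hdp/current/hbase-client/lib/*" MyHBaseApp.java
```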
12-21-2016 12:02 AM
1 Kudo
I would also add to Ted's comment that netty-3.2.4 (which is required by hadoop-hdfs) and netty-all-4.0.23 do not conflict, because 3.2.4 is the old version with the org.jboss.netty package while 4.0.23 uses the io.netty package. They even have different artifact IDs (netty vs netty-all), so it's safe to use both of them in the same project.
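A quick way to confirm which netty artifacts a Maven project actually pulls in (run from the project root):

```
# Both artifacts should show up, under different group/artifact IDs
mvn dependency:tree | grep -i netty
```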