Member since: 02-19-2016
Posts: 158
Kudos Received: 69
Solutions: 24
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 666 | 11-29-2017 08:47 PM
 | 926 | 10-24-2017 06:37 PM
 | 10986 | 08-04-2017 06:58 PM
 | 1009 | 05-15-2017 06:42 PM
 | 1217 | 03-27-2017 06:36 PM
01-31-2017
06:49 PM
@Jeff Watson Have you tried logging the generated upsert statement and executing it manually using sqlline? As for the versions of the libraries, keep in mind that Phoenix in HDP has a number of patches and features compared to the Apache Phoenix 4.7 release. If you are building your application for HDP, I would suggest using the corresponding libraries (whether it's Phoenix, HBase, or Hadoop) located at http://nexus-private.hortonworks.com/nexus/content/groups/public If you are using Maven, I would suggest adding it to the repositories section of your pom.xml: <repository>
<id>public</id>
<name>hwx</name>
<url>http://nexus-private.hortonworks.com/nexus/content/groups/public/</url>
</repository> and using the corresponding versions of the phoenix/hbase/hadoop libraries. For example, the hadoop version is supposed to be 2.7.3.2.5.3.0-37. And a few words on why the CSV classes are different: Phoenix uses shading for most of its external libraries, so classes may be modified when they are added to the final jar (to reflect the shaded imports). But that should not affect the debug information.
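A hedged way to find the HDP build suffix that gets appended to the Apache version of each library (the 2.5.3.0-37 part of 2.7.3.2.5.3.0-37); the exact directory names depend on your installation:
```bash
# The installed HDP build directories (e.g. 2.5.3.0-37) give the suffix that is
# appended to the Apache version when choosing library versions from the repo.
ls /usr/hdp/
```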
01-27-2017
10:11 PM
That's definitely strange. I just tried a fresh install of 5.1.14 with sandbox 2.5 on my Windows 8 laptop and everything works just fine. Have you tried starting it with another setting for the network adapter (I would suggest trying "Not attached") to check whether it's really a network adapter settings problem or something else?
01-27-2017
08:56 PM
Check that you have the 'NAT' adapter in the Network settings for the VM.
01-27-2017
08:31 PM
I'm not sure that I understand the changes you have made to hbase-site.xml. Could you please explain them a bit more? If you can provide the master log, that may be useful for understanding what exactly is wrong on your side. One small note: from the sandbox perspective there is no difference between running it on a pure Linux box or a Linux VM on MacOS. It's isolated in a docker container, which has its own IP that is supposed to resolve to the sandbox.hortonworks.com FQDN, and this name is used by all HDP services. It's not recommended to change this name or to change it to localhost, unless you want to do a fresh install of HDP. Port forwarding is just a way to get access to this docker container from the outside.
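To illustrate, a hedged sketch of how to inspect that setup from the host (assuming the container is named sandbox, which may differ on your install):
```bash
# The container's own IP on the docker bridge
docker inspect -f '{{.NetworkSettings.IPAddress}}' sandbox
# The FQDN mapping that all HDP services rely on
docker exec sandbox grep sandbox.hortonworks.com /etc/hosts
# The ports forwarded from the host into the container
docker port sandbox
```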
01-27-2017
08:07 PM
As Josh already mentioned, the best way to deploy HDP clusters is Ambari. Bigtop is a packaging project that has some ability to deploy an unmanaged cluster for testing purposes; Ambari, on the other hand, is cluster management software. It may happen that one day we will see Ambari as one of the packages in Bigtop, but even in that case the deployment of the cluster would be done via Ambari blueprints.
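For reference, a hedged sketch of blueprint-driven deployment through Ambari's REST API (the host name, credentials, and JSON files are placeholders):
```bash
# Register a blueprint, then instantiate a cluster from it
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  -d @my-blueprint.json http://ambari-host:8080/api/v1/blueprints/my-blueprint
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  -d @my-cluster-template.json http://ambari-host:8080/api/v1/clusters/my-cluster
```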
01-23-2017
09:25 PM
If I understand correctly, you need to change :part_key to :key.
01-22-2017
07:24 PM
2 Kudos
Try turning on Java Kerberos debug logging by adding -Dsun.security.krb5.debug=true to HADOOP_OPTS. It usually helps to understand what exactly fails during GSS initialization.
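For example, a minimal sketch for a single shell session (the hdfs command is just a placeholder for whatever call triggers the failure):
```bash
# Turn on verbose JVM Kerberos tracing for hadoop client commands
export HADOOP_OPTS="$HADOOP_OPTS -Dsun.security.krb5.debug=true"
hdfs dfs -ls /    # re-run the operation that fails during GSS initialization
```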
01-19-2017
05:20 AM
1 Kudo
I checked the script more carefully. It looks like adding export CLASSPATH=$CLASSPATH:/etc/hbase/conf to zeppelin-env.sh via the Zeppelin configuration in Ambari would be enough.
01-18-2017
11:22 PM
4 Kudos
@Qi Wang Well, to fix it you will need to follow a few steps:
1. In the jdbc interpreter configuration, remove the default artifacts for phoenix and hbase (phoenix-4.7... and hbase-1.1.3...).
2. Add a new artifact for phoenix-client.jar (just provide the path /usr/hdp/current/phoenix-client/phoenix-client.jar).
3. Set up the interpreter so it can find the hbase configuration dir. This can be done by adding export CLASSPATH=$CLASSPATH:/etc/hbase/conf to the end of zeppelin-env.sh via Ambari (see the sketch below).
After a restart of the Zeppelin service everything is supposed to work.
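A minimal sketch of the zeppelin-env.sh change from step 3 (applied through the Zeppelin configuration in Ambari):
```bash
# Appended to the end of zeppelin-env.sh so the jdbc interpreter can locate
# hbase-site.xml; restart the Zeppelin service afterwards.
export CLASSPATH=$CLASSPATH:/etc/hbase/conf
```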
01-18-2017
09:05 PM
I bet that HBase is healthy. I was able to reproduce it locally: the same exception when running the test from Zeppelin, while sqlline (as well as the hbase shell) works perfectly.
01-04-2017
06:33 PM
@Bala Vignesh N V What do you mean by 'fixed width file'? Can you give a more detailed example?
12-23-2016
01:12 AM
Could you please explain in more detail why distcp hftp -> hdfs didn't work? That is the recommended way to copy between different versions of hadoop, and it should be run on the destination cluster.
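A minimal sketch of such a copy, run on the destination cluster (hostnames, ports, and paths are placeholders):
```bash
# Read from the older source cluster over HFTP (the namenode HTTP port),
# write into the destination cluster's HDFS.
hadoop distcp hftp://source-namenode:50070/path/on/source \
              hdfs://dest-namenode:8020/path/on/dest
```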
12-23-2016
12:53 AM
2 Kudos
@Karan Alang Here is what the javac documentation says about the classpath: classpath entries that are neither directories nor archives (.zip or .jar files) nor * are ignored. So you should use just a simple '*' instead of '*.jar'. I believe hbase-client/lib/* would be enough. You may also use -verbose to track what javac is doing.
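For example, a hedged sketch (the source file name and lib path are placeholders for your setup):
```bash
# Quote the wildcard so javac, not the shell, expands it into the jar list
javac -verbose -cp "/usr/hdp/current/hbase-client/lib/*:." MyHBaseApp.java
```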
12-21-2016
12:02 AM
1 Kudo
I would also add to Ted's comment that netty-3.2.4 (which is required by hadoop-hdfs) and netty-all-4.0.23 do not conflict, because 3.2.4 is the old version with the org.jboss.netty package while 4.0.23 uses the io.netty package. They actually even have different artifact IDs (netty vs netty-all), so it's safe to use both of them in the same project.
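A quick way to confirm which netty artifacts end up on the classpath, assuming a Maven build:
```bash
# Both generations should show up under their own groupId/artifactId
mvn dependency:tree | grep -i netty
```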
12-16-2016
10:25 PM
2 Kudos
You need to scan the whole table to get all columns. Each row in HBase can have its own schema.
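For example, a sketch using the hbase shell (the table name 't1' is hypothetical):
```bash
# A full scan is the only way to discover every column qualifier,
# since each row can carry its own set of columns.
echo "scan 't1'" | hbase shell
```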
12-16-2016
07:01 PM
You need to use the same command, grant 'sami', 'RWXCA', but you need to run the hbase shell with the hbase Kerberos ticket. Ranger is a centralized platform for managing security on your cluster in one place.
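A hedged sketch of what that can look like on an HDP node (the keytab path and principal are typical defaults and may differ on your cluster):
```bash
# Authenticate as the hbase service user, then issue the grant
kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase-mycluster@EXAMPLE.COM
echo "grant 'sami', 'RWXCA'" | hbase shell
```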
12-16-2016
05:02 PM
2 Kudos
Since you have enabled Kerberos, you need to grant permissions in HBase. Use the hbase shell with the grant command for that.
12-15-2016
10:06 PM
2 Kudos
This is just an informational message during the connection. If your application doesn't work, check that ZooKeeper as well as HBase are started.
11-16-2016
10:11 AM
Check this topic on how to use doAs: https://community.hortonworks.com/questions/46500/spark-cant-connect-to-hbase-using-kerberos-in-clus.html
11-10-2016
10:56 AM
1 Kudo
No. During the start of the MR job you may see a message like:
mapreduce.MultiHfileOutputFormat: Configuring 20 reduce partitions to match current region count
That's exactly the number of reducers that will be created. How many of them run in parallel depends on the MR engine configuration.
11-10-2016
10:21 AM
The MR job creates one reducer per region. So if you are loading data into an empty table, you may pre-split the table from the HBase shell or use salting during table creation.
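A minimal sketch of both options (table and column-family names are hypothetical):
```bash
# Option 1 - pre-split the table at creation time from the hbase shell:
echo "create 'MYTABLE', 'cf', SPLITS => ['a','f','m','s']" | hbase shell
# Option 2 - create a salted Phoenix table (row key hashed into 8 buckets):
#   CREATE TABLE MYTABLE (id VARCHAR PRIMARY KEY, val VARCHAR) SALT_BUCKETS = 8;
```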
11-08-2016
12:47 PM
Can you run ls -la /usr/hdp/2.5.0.0-1245/hbase/lib/*.jar and attach the output here?
11-08-2016
07:53 AM
I would suggest putting back the original protobuf jar (there was a thread about a similar issue somewhere on the internet, but that issue is not related to protobuf). Also check that no new jars were added to the HBase lib directory (like phoenix-client, for example).
11-08-2016
05:44 AM
1 Kudo
Did you make any changes to the hbase classpath, or were any new jars added to the lib directory? It seems that you have the wrong jline there.
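A quick way to see which jline the hbase scripts actually pick up (paths follow the usual HDP layout and may differ):
```bash
# Look for jline on the effective HBase classpath and in the lib directory
hbase classpath | tr ':' '\n' | grep -i jline
ls -la /usr/hdp/current/hbase-client/lib/ | grep -i jline
```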
11-07-2016
06:37 PM
The only way Phoenix supports defining a table with a schema is by using a dot between the schema and table names. Colons are used internally by HBase, and Phoenix does that transformation on its own. As I mentioned earlier, Phoenix has a bug where the schema and table name should each be double-quoted separately, otherwise the table is created incorrectly with an empty schema. Moreover, on the HBase layer this causes ambiguous physical table mappings like the one you are seeing in your example. So my recommendation is to quote the schema and the table name separately, or just not use quotes at all. And you are right, the feature was backported from 4.8.
11-04-2016
05:08 AM
Try String tableStr = "ACME.ENDPOINT_STATUS"; (no extra double quotes inside the string, and a dot between the schema name and the table name). Hope that will work. One more note: if you create a table using double quotes, put the schema and the table name in quotes separately, like create table "ACME"."ENDPOINT_STATUS"; in that case you will need to quote the table name like you do in your sample. It seems that there is a bug in the Phoenix code: if you quote them together, Phoenix creates a table with an empty schema and "schema.table" as the table name. You may check it by querying the SYSTEM.CATALOG table.
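A hedged way to check how the table was actually registered (the sqlline path and the ZooKeeper quorum/znode are the usual HDP defaults and may differ on your cluster):
```bash
# Open sqlline against the cluster ...
/usr/hdp/current/phoenix-client/bin/sqlline.py localhost:2181:/hbase-unsecure
# ... then, inside sqlline, inspect the catalog:
#   SELECT DISTINCT TABLE_SCHEM, TABLE_NAME FROM SYSTEM.CATALOG
#   WHERE TABLE_NAME LIKE '%ENDPOINT_STATUS%';
```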
11-03-2016
09:31 PM
Looks like you have namespace mapping enabled. Check that the hbase conf dir is in the classpath. And you need to use the ACME.xxxxx format for table names.
09-15-2016
09:42 PM
You need to be sure that the Phoenix client has hbase-site.xml in the classpath. You can do this by setting the HBASE_CONF_DIR environment variable.
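A minimal sketch, assuming the standard HDP locations (the ZooKeeper quorum argument is a placeholder):
```bash
# Point the Phoenix client at the HBase configuration so it loads hbase-site.xml
export HBASE_CONF_DIR=/etc/hbase/conf
/usr/hdp/current/phoenix-client/bin/sqlline.py zk-host:2181:/hbase-unsecure
```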
09-02-2016
06:10 PM
1 Kudo
Yes, it updates the indexes. If we are talking about the MR job, then the bulk load generates HFiles for the user table and the index tables at the same time and loads them using the HBase bulk load. In the case of PSQL, the data is loaded using regular upserts and the indexes are updated in the regular way.
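For reference, a hedged sketch of the MR bulk load invocation (the table name and input path are placeholders; the jar path follows the usual HDP layout):
```bash
# The CsvBulkLoadTool builds HFiles for the table and its indexes in one job,
# then hands them to the HBase bulk load.
HADOOP_CLASSPATH=$(hbase mapredcp):/etc/hbase/conf hadoop jar \
  /usr/hdp/current/phoenix-client/phoenix-client.jar \
  org.apache.phoenix.mapreduce.CsvBulkLoadTool \
  --table MYTABLE --input /tmp/data.csv
```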