Created 06-22-2016 05:59 PM
Hi:
I am trying to insert from Phoenix into an HBase table and I get the error below. I know roughly what the error means, but I don't want to modify the jar phoenix-4.4.0.2.4.0.0-169-client.jar, so does anyone know how to get a new jar?
16/06/22 19:44:36 INFO zookeeper.ZooKeeper: Client environment:java.library.path=:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native
16/06/22 19:44:36 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
16/06/22 19:44:36 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
16/06/22 19:44:36 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
16/06/22 19:44:36 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
16/06/22 19:44:36 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-573.8.1.el6.x86_64
16/06/22 19:44:36 INFO zookeeper.ZooKeeper: Client environment:user.name=hdfs
16/06/22 19:44:36 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hdfs
16/06/22 19:44:36 INFO zookeeper.ZooKeeper: Client environment:user.dir=/usr/hdp/2.4.0.0-169/phoenix
16/06/22 19:44:36 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=xxxxxxxx:2181,xxxxxxxx:2181,xxxxxxxxx:2181 sessionTimeout=90000 watcher=hconnection-0x35dc6a890x0, quorum=xxxxxxxx:2181,xxxxxxxx:2181,xxxxxxxxx:2181, baseZNode=/hbase
16/06/22 19:44:36 INFO zookeeper.ClientCnxn: Opening socket connection to server xxxxxxx/xxxxxxxx:2181. Will not attempt to authenticate using SASL (unknown error)
16/06/22 19:44:36 INFO zookeeper.ClientCnxn: Socket connection established to xxxxxxxxx/10.1.246.20:2181, initiating session
16/06/22 19:44:36 INFO zookeeper.ClientCnxn: Session establishment complete on server xxxxxxxx/10.1.246.20:2181, sessionid = 0x354b76f3879008b, negotiated timeout = 40000
16/06/22 19:44:36 INFO client.ZooKeeperRegistry: ClusterId read in ZooKeeper is null
16/06/22 19:44:36 INFO metrics.Metrics: Initializing metrics system: phoenix
16/06/22 19:44:36 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
16/06/22 19:44:36 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
16/06/22 19:44:36 INFO impl.MetricsSystemImpl: phoenix metrics system started
16/06/22 19:44:37 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3ed32db7 connecting to ZooKeeper ensemble=xxxxxxx:2181,xxxxxxx:2181,xxxxxxxx:2181
16/06/22 19:44:37 INFO client.ZooKeeperRegistry: ClusterId read in ZooKeeper is null
16/06/22 19:44:37 ERROR client.ConnectionManager$HConnectionImplementation: The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
Created 06-22-2016 06:01 PM
As the last ERROR line shows, the effective hbase-site.xml does not seem to be on the classpath.
The ZooKeeper ensemble is redacted; please make sure the quorum is the one used by HBase.
Created 06-22-2016 06:10 PM
You don't need to modify the client jar. As @Ted Yu mentioned, one of the reasons may be a missing hbase-site.xml on the classpath. You can also provide the proper ZooKeeper parent znode in the connection URL:
jdbc:phoenix:quorum:2181:/hbase-unsecure
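For example, with placeholder hosts standing in for the redacted quorum, the full form would look something like this (either as a sqlline invocation from the Phoenix bin directory or as a plain JDBC URL):
bin/sqlline.py zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase-unsecure
jdbc:phoenix:zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase-unsecure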
It would also help if you gave a bit more information about how you connect the client.
Created 06-22-2016 06:43 PM
Hi, many thanks, it is working now using quorum:2181:/hbase-unsecure, like this:
hadoop jar phoenix-4.4.0.2.4.0.0-169-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool --table journey_oficina_hbase --input /tmp/journey_oficina_hbase.csv -z lnxbig04.cajarural.gcr:2181:/hbase-unsecure
but it doesn't finish, I am still waiting...
16/06/22 20:41:52 INFO mapreduce.Job: map 100% reduce 100%
16/06/22 20:41:52 INFO mapreduce.Job: Job job_1464163049638_1416 completed successfully
16/06/22 20:41:52 INFO mapreduce.Job: Counters: 50
16/06/22 20:41:52 INFO mapreduce.CsvBulkLoadTool: Loading HFiles from /tmp/a08d1fda-6bf5-4b47-8bba-0a7fd0e28e47/JOURNEY_OFICINA_HBASE
16/06/22 20:41:53 WARN mapreduce.LoadIncrementalHFiles: managed connection cannot be used for bulkload. Creating unmanaged connection.
16/06/22 20:41:53 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x311fac48 connecting to ZooKeeper ensemble=lnxbig04.cajarural.gcr:2181
16/06/22 20:41:53 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=lnxbig04.cajarural.gcr:2181 sessionTimeout=90000 watcher=hconnection-0x311fac480x0, quorum=lnxbig04.cajarural.gcr:2181, baseZNode=/hbase-unsecure
16/06/22 20:41:53 INFO zookeeper.ClientCnxn: Opening socket connection to server lnxbig04.cajarural.gcr/10.1.246.18:2181. Will not attempt to authenticate using SASL (unknown error)
16/06/22 20:41:53 INFO zookeeper.ClientCnxn: Socket connection established to lnxbig04.cajarural.gcr/10.1.246.18:2181, initiating session
16/06/22 20:41:53 INFO zookeeper.ClientCnxn: Session establishment complete on server lnxbig04.cajarural.gcr/10.1.246.18:2181, sessionid = 0x154b76f380a00f0, negotiated timeout = 40000
16/06/22 20:41:53 WARN mapreduce.LoadIncrementalHFiles: Skipping non-directory hdfs://lnxbig05.cajarural.gcr:8020/tmp/a08d1fda-6bf5-4b47-8bba-0a7fd0e28e47/JOURNEY_OFICINA_HBASE/_SUCCESS
16/06/22 20:41:53 INFO hfile.CacheConfig: CacheConfig:disabled
16/06/22 20:41:53 INFO mapreduce.LoadIncrementalHFiles: Trying to load hfile=hdfs://lnxbig05.cajarural.gcr:8020/tmp/a08d1fda-6bf5-4b47-8bba-0a7fd0e28e47/JOURNEY_OFICINA_HBASE/0/bb208025d42b4b5fb14c3d8143e99878 first=00492016-06-01 18:00:00 last=F0132016-06-01 13:00:00
Any suggestions?
Created 06-22-2016 07:05 PM
I would suggest setting HADOOP_CLASSPATH=/path/to/hbase-protocol.jar:/path/to/hbase/conf as the official documentation says. More details can be found at
https://phoenix.apache.org/bulk_dataload.html
CSV bulk load uses the regular HBase routine to load HFiles, so the actual hbase-site.xml is required on the classpath.
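As a rough sketch for this HDP 2.4 cluster (the hbase-protocol.jar path and /etc/hbase/conf are assumed locations; adjust them to your installation), the bulk load would then be launched like this:
export HADOOP_CLASSPATH=/usr/hdp/2.4.0.0-169/hbase/lib/hbase-protocol.jar:/etc/hbase/conf
hadoop jar phoenix-4.4.0.2.4.0.0-169-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool --table journey_oficina_hbase --input /tmp/journey_oficina_hbase.csv -z lnxbig04.cajarural.gcr:2181:/hbase-unsecure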
Created 06-22-2016 06:54 PM
Hi:
From sqlline it is working like this:
0: jdbc:phoenix:zzzzzzz> UPSERT INTO JOURNEY_OFICINA_HBASE VALUES('0198','2016-06-01 00:00:00','8002','DVI96COU',8);
but not from the jar
Also, the table I created has a composite (multi-column) row key:
CREATE TABLE IF NOT EXISTS journey_oficina_hbase (
    CODNRBEENF CHAR(4) NOT NULL,
    FECHAOPRCNF CHAR(21) NOT NULL,
    CODINTERNO CHAR(4),
    CODTXF CHAR(8),
    FREQ BIGINT,
    CONSTRAINT pk PRIMARY KEY (CODNRBEENF, FECHAOPRCNF)
);
But now, how can I get the row with this row key? The first 4 digits are the office code and the rest is the date:
01982016-06-01 00:00:00
thanks
Created 06-22-2016 07:17 PM
Could you please explain your use case? Do you plan to query this data through HBase or through Phoenix? In the Phoenix case you just use regular SQL statements via the JDBC driver. For HBase you need to handle everything yourself, so to look up a specific record you would run something like get 'JOURNEY_OFICINA_HBASE', '01982016-06-01 00:00:00 '
You need the trailing whitespace because you are using fixed-size CHAR types, so the total length of the row key must be exactly 25 characters (4 + 21).
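To illustrate both paths (a sketch only, not taken verbatim from this thread): in Phoenix you query by the two primary-key columns, and this example assumes Phoenix pads the CHAR literals to their declared length for the comparison:
SELECT * FROM JOURNEY_OFICINA_HBASE WHERE CODNRBEENF = '0198' AND FECHAOPRCNF = '2016-06-01 00:00:00';
In the HBase shell the row key is the raw concatenation of the padded CHAR values, so the 19-character date is followed by two spaces to fill CHAR(21):
get 'JOURNEY_OFICINA_HBASE', '01982016-06-01 00:00:00  '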