Support Questions

Loading csv file into phoenix table

I ran the following command:

[ahoussem@namenode ~]$ hadoop jar /usr/hdp/2.5.3.0-37/phoenix/phoenix-4.7.0.2.5.3.0-37-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool --table c_values --input /user/houssem/c_values.csv

and got this output:

17/03/07 16:38:49 INFO zookeeper.ZooKeeper: Client environment:java.library.path=:/usr/hdp/2.5.3.0-37/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.3.0-37/hadoop/lib/native
17/03/07 16:38:49 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
17/03/07 16:38:49 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
17/03/07 16:38:49 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
17/03/07 16:38:49 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
17/03/07 16:38:49 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-642.13.1.el6.x86_64
17/03/07 16:38:49 INFO zookeeper.ZooKeeper: Client environment:user.name=ahoussem
17/03/07 16:38:49 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/ahoussem
17/03/07 16:38:49 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/ahoussem
17/03/07 16:38:49 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@247e2ef3
17/03/07 16:38:49 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
17/03/07 16:38:49 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
17/03/07 16:38:49 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x25aa85b5762013f, negotiated timeout = 40000
17/03/07 16:38:49 INFO client.ZooKeeperRegistry: ClusterId read in ZooKeeper is null
17/03/07 16:38:49 INFO metrics.Metrics: Initializing metrics system: phoenix
17/03/07 16:38:49 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
17/03/07 16:38:49 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
17/03/07 16:38:49 INFO impl.MetricsSystemImpl: phoenix metrics system started
Exception in thread "main" java.sql.SQLException: java.lang.RuntimeException: java.lang.NullPointerException
	at org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2590)
	at org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2327)
	at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
	at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2327)
	at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:233)
	at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:142)
	at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:202)
	at java.sql.DriverManager.getConnection(DriverManager.java:664)
	at java.sql.DriverManager.getConnection(DriverManager.java:208)
	at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:305)
	at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:296)
	at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.loadData(AbstractBulkLoadTool.java:209)
	at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.run(AbstractBulkLoadTool.java:183)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
	at org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:101)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: java.lang.RuntimeException: java.lang.NullPointerException
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:208)
	at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
	at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
	at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
	at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
	at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:794)
	at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
	at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
	at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:405)
	at org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2358)
	... 21 more
Caused by: java.lang.NullPointerException
	at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.getMetaReplicaNodes(ZooKeeperWatcher.java:395)
	at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:562)
	at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:61)
	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1192)
	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1159)
	at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:300)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
	... 30 more


1 ACCEPTED SOLUTION


Re: Loading csv file into phoenix table

Phoenix is unable to find the location of the hbase:meta region. This is likely caused by one of two things:

First, make sure that HBase is actually running and healthy.

Second, Phoenix may be looking in the wrong place in ZooKeeper for your HBase instance. Your log shows a connect string of localhost:2181 with no znode path, and "ClusterId read in ZooKeeper is null", which suggests the client is not pointed at the znode HBase is actually using. You can use the "-z" option to specify the full ZooKeeper quorum for your environment (e.g. localhost:2181:/hbase-unsecure).
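As a sketch, the two checks above might look like this on the command line (jar path, table, and input file are taken from the question; the hostname and znode are assumptions to adjust for your cluster — on unsecured HDP installs the HBase znode is typically /hbase-unsecure):

```shell
# 1. Confirm HBase is up and hbase:meta is assigned
#    (run from a node with the HBase client configured).
echo "status" | hbase shell

# 2. Re-run the bulk load, pointing Phoenix at the ZK quorum
#    and znode your HBase instance actually uses.
hadoop jar /usr/hdp/2.5.3.0-37/phoenix/phoenix-4.7.0.2.5.3.0-37-client.jar \
    org.apache.phoenix.mapreduce.CsvBulkLoadTool \
    --table c_values \
    --input /user/houssem/c_values.csv \
    -z localhost:2181:/hbase-unsecure
```

If the "status" check fails or shows no live region servers, fix HBase first; otherwise the -z value is the likely culprit.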

