Support Questions

CDH 6.3.2: Sqoop can't connect to ZooKeeper

Explorer

Hey, when I run this Sqoop import:

sqoop import "-Dorg.apache.sqoop.splitter.allow_text_splitter=true" --driver org.apache.phoenix.jdbc.PhoenixDriver --connect jdbc:phoenix:172.18.1.16:2181 --query "select sid from test.PV WHERE 1=1 and \$CONDITIONS limit 10" --hive-import --hive-database tmp --hive-table system_stats  --target-dir /hivetable/phoenixdb --delete-target-dir --split-by sid --hive-overwrite --null-string '\\N' --null-non-string '\\N'

 

it throws this error:
 

 

2023-01-06 16:53:15,737 INFO [ReadOnlyZKClient-172.18.1.16:2181@0x062577d6] org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/yarn/nm/usercache/root/appcache/application_1672990653606_0007/container_e14_1672990653606_0007_01_000002
2023-01-06 16:53:15,738 INFO [ReadOnlyZKClient-172.18.1.16:2181@0x062577d6] org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=172.18.1.16:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$16/1413879487@425c8e15
2023-01-06 16:53:15,755 INFO [ReadOnlyZKClient-172.18.1.16:2181@0x062577d6-SendThread(slave3.sf.fulin.com:2181)] org.apache.zookeeper.ClientCnxn: Opening socket connection to server slave3.sf.fulin.com/172.18.1.16:2181. Will not attempt to authenticate using SASL (unknown error)
2023-01-06 16:53:15,756 INFO [ReadOnlyZKClient-172.18.1.16:2181@0x062577d6-SendThread(slave3.sf.fulin.com:2181)] org.apache.zookeeper.ClientCnxn: Socket connection established, initiating session, client: /172.18.1.4:42814, server: slave3.sf.fulin.com/172.18.1.16:2181
2023-01-06 16:53:15,761 INFO [ReadOnlyZKClient-172.18.1.16:2181@0x062577d6-SendThread(slave3.sf.fulin.com:2181)] org.apache.zookeeper.ClientCnxn: Session establishment complete on server slave3.sf.fulin.com/172.18.1.16:2181, sessionid = 0xff858602775200c5, negotiated timeout = 90000
2023-01-06 16:53:15,905 INFO [main] org.apache.phoenix.query.ConnectionQueryServicesImpl: HConnection established. Stacktrace for informational purposes: hconnection-0x3f4f9acd java.lang.Thread.getStackTrace(Thread.java:1559)
org.apache.phoenix.util.LogUtil.getCallerStackTrace(LogUtil.java:55)
org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:433)
org.apache.phoenix.query.ConnectionQueryServicesImpl.access$400(ConnectionQueryServicesImpl.java:273)
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2557)
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2533)
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2533)
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
java.sql.DriverManager.getConnection(DriverManager.java:664)
java.sql.DriverManager.getConnection(DriverManager.java:270)
org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:298)
org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:213)
org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:165)
org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:158)
org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137)
org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
java.security.AccessController.doPrivileged(Native Method)
javax.security.auth.Subject.doAs(Subject.java:422)
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)

2023-01-06 16:53:20,825 INFO [main] org.apache.hadoop.hbase.client.RpcRetryingCallerImpl: Call exception, tries=6, retries=16, started=4154 ms ago, cancelled=false, msg=Meta znode is null, details=row 'SYSTEM:CATALOG' on table 'hbase:meta' at null, see https://s.apache.org/timeout
2023-01-06 16:53:24,834 INFO [main] org.apache.hadoop.hbase.client.RpcRetryingCallerImpl: Call exception, tries=7, retries=16, started=8163 ms ago, cancelled=false, msg=Meta znode is null, details=row 'SYSTEM:CATALOG' on table 'hbase:meta' at null, see https://s.apache.org/timeout

 

I don't know how to resolve it, please help!
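
The key line in the retries is "Meta znode is null": the client reaches ZooKeeper fine but finds no hbase:meta location registered under the znode it is watching. A minimal way to inspect that znode directly, assuming the default /hbase parent and the zkcli that ships with HBase:

# Open the ZooKeeper CLI bundled with HBase, against the same quorum
hbase zkcli -server 172.18.1.16:2181

# Inside the CLI: list the HBase znodes and read the meta location
ls /hbase
get /hbase/meta-region-server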

5 REPLIES

Explorer

Holy **bleep**! Which hero can help?

Explorer

Who can help me out? I'll pay 200 US dollars.

Super Collaborator

@Love_Cat Are you setting this up for the first time, or was it working fine earlier?

As per the log, it seems the job is not able to locate the HBase meta location. Do you have the HBase Gateway role added to the node on which the Sqoop job is running?
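
A quick way to check, assuming the standard CDH client-config path (deploying the Gateway role is what creates it):

# The HBase Gateway role deploys the client config here
ls -l /etc/hbase/conf/hbase-site.xml

# Confirm the ZooKeeper quorum and znode parent the client will actually use
grep -A1 -e 'hbase.zookeeper.quorum' -e 'zookeeper.znode.parent' /etc/hbase/conf/hbase-site.xml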

Explorer
[root@slave3 conf]# more sqoop-env.sh 
#!/usr/bin/env bash
##
# Generated by Cloudera Manager and should not be modified directly
##

#Set path to where bin/hadoop is available
export HADOOP_COMMON_HOME=/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/hadoop

#Set path to where hadoop-*-core.jar is available
export HADOOP_MAPRED_HOME=/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/hadoop

#set the path to where bin/hbase is available
export HBASE_HOME=/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/hbase

#Set the path to where bin/hive is available
export HIVE_HOME=/opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/hive

Hey bro, very happy to see you! I have set the paths shown above as you said, but it does not work.
My environment is CDH 6.3.2, and it's my first time running a Sqoop Phoenix-to-Hive import.
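
One thing worth ruling out while the client config is in question: the Phoenix JDBC URL can carry the ZooKeeper root znode explicitly, so the mappers do not depend on hbase-site.xml to find it. A sketch of the same command, assuming CDH's default /hbase znode parent (verify zookeeper.znode.parent in hbase-site.xml first):

# Same import, with the root znode appended to the JDBC URL
sqoop import "-Dorg.apache.sqoop.splitter.allow_text_splitter=true" --driver org.apache.phoenix.jdbc.PhoenixDriver --connect jdbc:phoenix:172.18.1.16:2181:/hbase --query "select sid from test.PV WHERE 1=1 and \$CONDITIONS limit 10" ...

(remaining options unchanged from the original command)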

Explorer

[Screenshot attached: 2023-01-12, 9:45 AM]

It turns out my HBase Gateway was not working.
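
For anyone who hits the same wall: if the Gateway role is missing, or was added but its configuration never pushed out, adding it and redeploying the client configuration is the usual fix. Sketched for Cloudera Manager 6.x; exact menu labels may vary:

# In Cloudera Manager:
#   Clusters > HBase > Instances > Add Role Instances > Gateway (pick the Sqoop host)
#   then Actions > Deploy Client Configuration

# Sanity check from that host afterwards: the shell should reach hbase:meta
echo "scan 'hbase:meta', {LIMIT => 1}" | hbase shell -n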