sqoop import hive table error

Expert Contributor

On HDP Sandbox 2.6.4 I imported from MySQL to HDFS successfully, but then I tried to import from MySQL to Hive with the following command:

[maria_dev@sandbox-hdp ~]$ sqoop import --connect jdbc:mysql://sandbox-hdp.hortonworks.com/azhadoop --username root --password hadoop --query 'select * from iris_mysql WHERE $CONDITIONS' --m 1 --hive-import --hive-table azhadoop.iris_hive --target-dir /tmp/hive_temp

I got this error:

19/04/27 14:22:19 ERROR manager.SqlManager: Error reading from database: java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic@4b8ee4de is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic@4b8ee4de is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.

10 Replies

Master Mentor

@Erkan ŞİRİN Can you try adding an explicit JDBC driver, like this:

sqoop import --connect jdbc:mysql://sandbox-hdp.hortonworks.com/azhadoop --driver com.mysql.jdbc.Driver  --username root --password hadoop 
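For clarity, the complete command with the explicit driver would be (a sketch; all other arguments unchanged from the original command):

sqoop import --connect jdbc:mysql://sandbox-hdp.hortonworks.com/azhadoop --driver com.mysql.jdbc.Driver --username root --password hadoop --query 'select * from iris_mysql WHERE $CONDITIONS' --m 1 --hive-import --hive-table azhadoop.iris_hive --target-dir /tmp/hive_temp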



Expert Contributor

Hi @Geoffrey Shelton Okot, thanks for your answer. But are you sure this is a driver problem? I think the driver works fine, because I am able to import from MySQL to HDFS.

Master Mentor

@Erkan ŞİRİN

Sorry I couldn't get back earlier. It won't cost you anything to try, and it will let us eliminate the driver from the list of possible causes.

"Ensure that you have called .close() on any active streaming result sets before attempting more queries. " corresponds to that!

Expert Contributor

Hi @Geoffrey Shelton Okot, thanks again. Interestingly, adding --driver made the ERROR disappear. But another problem showed up:

[root@sandbox-hdp ~]# sqoop import --connect jdbc:mysql://sandbox-hdp.hortonworks.com/azhadoop --driver com.mysql.jdbc.Driver --username root --password hadoop --query "select * from iris_mysql WHERE \$CONDITIONS" --m 1 --hive-import --hive-table azhadoop.iris_hive --target-dir /tmp/hive_temp
Warning: /usr/hdp/2.6.4.0-91/accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
19/05/07 21:04:19 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.6.4.0-91
19/05/07 21:04:19 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
19/05/07 21:04:19 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
19/05/07 21:04:19 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
19/05/07 21:04:20 WARN sqoop.ConnFactory: Parameter --driver is set to an explicit driver however appropriate connection manager is not being set (via --connection-manager). Sqoop is going to fall back to org.apache.sqoop.manager.GenericJdbcManager. Please specify explicitly which connection manager should be used next time.
19/05/07 21:04:20 INFO manager.SqlManager: Using default fetchSize of 1000
19/05/07 21:04:20 INFO tool.CodeGenTool: Beginning code generation
19/05/07 21:04:20 INFO manager.SqlManager: Executing SQL statement: select * from iris_mysql WHERE  (1 = 0)
19/05/07 21:04:20 INFO manager.SqlManager: Executing SQL statement: select * from iris_mysql WHERE  (1 = 0)
19/05/07 21:04:20 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hdp/2.6.4.0-91/hadoop-mapreduce
Note: /tmp/sqoop-root/compile/3e81cb85d0e8a571138759f1babfc886/QueryResult.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
19/05/07 21:04:22 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/3e81cb85d0e8a571138759f1babfc886/QueryResult.jar
19/05/07 21:04:22 INFO mapreduce.ImportJobBase: Beginning query import.
19/05/07 21:04:23 INFO client.RMProxy: Connecting to ResourceManager at sandbox-hdp.hortonworks.com/172.17.0.2:8032
19/05/07 21:04:23 INFO client.AHSProxy: Connecting to Application History server at sandbox-hdp.hortonworks.com/172.17.0.2:10200
19/05/07 21:04:26 INFO db.DBInputFormat: Using read commited transaction isolation
19/05/07 21:04:26 INFO mapreduce.JobSubmitter: number of splits:1
19/05/07 21:04:27 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1557245169101_0001
19/05/07 21:04:27 INFO impl.YarnClientImpl: Submitted application application_1557245169101_0001
19/05/07 21:04:27 INFO mapreduce.Job: The url to track the job: http://sandbox-hdp.hortonworks.com:8088/proxy/application_1557245169101_0001/
19/05/07 21:04:27 INFO mapreduce.Job: Running job: job_1557245169101_0001
19/05/07 21:04:40 INFO mapreduce.Job: Job job_1557245169101_0001 running in uber mode : false
19/05/07 21:04:40 INFO mapreduce.Job:  map 0% reduce 0%

The job doesn't move; it's stuck at the MapReduce stage. No progress, the job holds 2550 MB of YARN memory, and the status stays RUNNING. No error, but no progress. How can anyone import a query from MySQL to Hive in a sandbox?
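To see where such a job hangs, the YARN CLI can help (a sketch; the application id is the one from the log above, and yarn logs may only return complete output once log aggregation has run):

# list running applications and their progress
yarn application -list
# fetch container logs for the stuck job
yarn logs -applicationId application_1557245169101_0001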

Master Mentor

@Erkan ŞİRİN

Can you share your mapred-site.xml?

Expert Contributor

Master Mentor

@Erkan ŞİRİN

Can you add these values to your mapred-site.xml? Get the value for mapreduce.job.ubertask.maxbytes from the dfs.block.size parameter in hdfs-site.xml (an XML sketch follows the list):

mapreduce.job.ubertask.enable = true
mapreduce.job.ubertask.maxmaps = 1
mapreduce.job.ubertask.maxreduces = 1
mapreduce.job.ubertask.maxbytes = {get value from the dfs.block.size parameter in hdfs-site.xml}
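In mapred-site.xml form, the entries would look like this (a sketch; 134217728 assumes a 128 MB dfs.block.size, substitute the value from your own hdfs-site.xml):

<!-- sketch for mapred-site.xml; enables uber mode for small single-map jobs -->
<property>
  <name>mapreduce.job.ubertask.enable</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.job.ubertask.maxmaps</name>
  <value>1</value>
</property>
<property>
  <name>mapreduce.job.ubertask.maxreduces</name>
  <value>1</value>
</property>
<property>
  <name>mapreduce.job.ubertask.maxbytes</name>
  <value>134217728</value> <!-- assumption: 128 MB block size -->
</property>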

Then restart YARN and MapReduce and relaunch the job.

Expert Contributor

Thank you very much @Geoffrey Shelton Okot. The job worked. One last thing: I can't see any table in the Hive database azhadoop.

My query:

sqoop import --connect jdbc:mysql://sandbox-hdp.hortonworks.com/azhadoop --driver com.mysql.jdbc.Driver --username root --password hadoop --query "select * from iris_mysql WHERE \$CONDITIONS" --m 1 --hive-import --hive-table azhadoop.iris_hive --target-dir /tmp/hive_temp

[screenshot: Hive azhadoop database in DBeaver, no tables visible]

The result of the MR job:

19/05/08 21:33:10 INFO mapreduce.Job: Counters: 30
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=172694
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=87
                HDFS: Number of bytes written=4574
                HDFS: Number of read operations=4
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Launched map tasks=1
                Other local map tasks=1
                Total time spent by all maps in occupied slots (ms)=26964
                Total time spent by all reduces in occupied slots (ms)=0
                Total time spent by all map tasks (ms)=3852
                Total vcore-milliseconds taken by all map tasks=3852
                Total megabyte-milliseconds taken by all map tasks=5916672
        Map-Reduce Framework
                Map input records=151
                Map output records=151
                Input split bytes=87
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=135
                CPU time spent (ms)=1310
                Physical memory (bytes) snapshot=241512448
                Virtual memory (bytes) snapshot=3256225792
                Total committed heap usage (bytes)=152567808
        File Input Format Counters
                Bytes Read=0
        File Output Format Counters
                Bytes Written=4574
19/05/08 21:33:10 INFO mapreduce.ImportJobBase: Transferred 4.4668 KB in 26.0204 seconds (175.7852 bytes/sec)
19/05/08 21:33:10 INFO mapreduce.ImportJobBase: Retrieved 151 records.
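The counters show 151 records were written, so one way to double-check where they landed (a sketch assuming the default HDP Hive warehouse path):

# list tables registered in the azhadoop database
hive -e 'show tables in azhadoop;'
# inspect the warehouse directory on HDFS (default HDP location, an assumption)
hdfs dfs -ls /apps/hive/warehouse/azhadoop.db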

Master Mentor

@Erkan ŞİRİN

Nice to know it worked! Just wondering: I didn't see a --create-hive-table option. Can you try adding it, like below?

--hive-table azhadoop.iris_hive \
--create-hive-table \
--target-dir /tmp/hive_temp
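Putting it together, the full command would be (a sketch: your earlier command plus --create-hive-table):

sqoop import --connect jdbc:mysql://sandbox-hdp.hortonworks.com/azhadoop --driver com.mysql.jdbc.Driver --username root --password hadoop --query "select * from iris_mysql WHERE \$CONDITIONS" --m 1 --hive-import --create-hive-table --hive-table azhadoop.iris_hive --target-dir /tmp/hive_temp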

Please let me know