Member since: 11-04-2019
Posts: 26
Kudos Received: 1
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 1037 | 11-25-2019 12:30 AM |
04-13-2020
12:46 AM
Hi @elkrish, was this resolved? Could you share the solution if you found one for this issue?
04-11-2020
12:12 PM
Hi @akv31, regarding point 1: did you create a new java.security file with the properties you want to change and place it on every node in a local directory, or did you put it in an HDFS directory?
02-08-2020
09:41 PM
@paras Thank you for your inputs. It is an MSSQL database I am trying to fetch data from (not MySQL). Upgrading the production database would be difficult at the moment. Is there a way to change/override the disabled-algorithms property of the java.security file at runtime in the sqoop command, so that I don't have to change the security file on each node? Thanks.
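To make the question concrete, this is roughly what I am hoping is possible (an untested sketch; /tmp/tls-overrides.properties is a hypothetical overrides file containing the adjusted jdk.tls.disabledAlgorithms line, and it would still need to be readable on every node that runs a map task):
# client-side JVM (where Sqoop opens its first JDBC connection to read metadata)
export HADOOP_OPTS="$HADOOP_OPTS -Djava.security.properties=/tmp/tls-overrides.properties"
# map-task JVMs; the generic -D option must come right after "import"
# (note this replaces the cluster's default map JVM opts, so any -Xmx setting has to be repeated)
sqoop import \
  -Dmapreduce.map.java.opts="-Djava.security.properties=/tmp/tls-overrides.properties" \
  --connect 'jdbc:sqlserver://IP:PORT;database=DB;' --username <> -password <> \
  --table 'my_table' --as-parquetfile --delete-target-dir --target-dir /user/test/ --m 1
# -Djava.security.properties only appends/overrides entries when security.overridePropertiesFile=true
# in the master java.security file, which is the default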
02-05-2020
09:44 PM
@paras Thank you. As a workaround, I commented out the disabled-algorithms property in the java.security file, tried the sqoop command, and that worked. I just want to understand how this interacts with Sqoop when a change is made to java.security. The above is a temporary fix; I am looking for a way to change the security property at runtime in the sqoop import command, because we cannot modify the java.security file on each node and risk affecting other systems. Please share your inputs. Thanks in advance.
02-05-2020
02:43 AM
Hi,
I am trying to connect to an MSSQL database.
The custom jar used is sqljdbc4.jar.
I am using SQL Server authentication with a username and password.
Sqoop version: 1.4.6-cdh5.14.2
The MSSQL database I am trying to connect to is SSL enabled, and I get the error below when running the sqoop command:
The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "Server chose TLSv1, but that protocol version is not enabled or not supported by the client.".
Is there a solution for this?
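For reference, the property involved is jdk.tls.disabledAlgorithms in the JDK's java.security file. The value below is only a sketch (the exact default list varies between JDK builds); the relevant part is that TLSv1 appears in it and has to be removed, or the property overridden, for the client to negotiate TLSv1:
# line from java.security -- removing TLSv1 (and TLSv1.1 if needed) re-enables those protocols
jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, DES, MD5withRSA, DH keySize < 1024, EC keySize < 224, 3DES_EDE_CBC, anon, NULL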
Tags:
- sql-server
- Sqoop
Labels:
- Apache Sqoop
- Security
01-05-2020
09:50 PM
@Shelton Thank you for your inputs. The example above shows serializing and deserializing a file that contains images; what I am trying to do is import a column of an Oracle table into HDFS using Sqoop. Other documentation suggests mapping the column to a string with --map-column-java, but that throws:
Caused by: java.sql.SQLException: Invalid column type: getString/getNString not implemented for class oracle.jdbc.driver.T4CBlobAccessor
01-05-2020
04:46 AM
@Shelton Could you please repost the solution? I am facing a similar issue and have created a separate thread for it: https://community.cloudera.com/t5/Support-Questions/sqoop-import-of-BLOB-columns-from-oracle-database/m-p/286761#M212633 Thanks in advance.
01-04-2020
11:33 PM
@pavan_kumar I tried --map-column-java column_name=binary / Binary. Below are the errors I get:
20/01/05 10:04:58 ERROR tool.ImportTool: Import failed: No ResultSet method for Java type binary
20/01/05 10:04:07 ERROR tool.ImportTool: Import failed: No ResultSet method for Java type Binary
I also tried --map-column-hive column_name=binary. The sqoop import succeeded, but when I query the table in Impala it shows the error below:
AnalysisException: Unsupported type 'BINARY' in Table
All other columns are accessible, but the BLOB column is corrupted and hence cannot be queried.
12-31-2019
03:34 AM
I have to import a table from Oracle that contains both BLOB and CLOB columns.
--map-column-java CLOB_column=String works fine
--map-column-java BLOB_column=String throws an error
How do we import BLOB data type columns from Oracle using Sqoop?
Query used:
sqoop import --connect 'jdbc:oracle:thin:@server_ip:port/database' --username <> -password <> --query "SELECT * FROM table_name WHERE \$CONDITIONS" --map-column-java CMID=Integer,DATAPROP=String --as-parquetfile --delete-target-dir --target-dir /user/test/ --m 1
DATAPROP is the BLOB column.
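One idea I am considering as a workaround (an untested sketch; it relies on the standard Oracle functions DBMS_LOB.SUBSTR and RAWTOHEX, and it truncates the BLOB to its first 2000 bytes, roughly the RAW limit in a plain SQL context on a default configuration):
sqoop import --connect 'jdbc:oracle:thin:@server_ip:port/database' --username <> -password <> \
  --query "SELECT CMID, RAWTOHEX(DBMS_LOB.SUBSTR(DATAPROP, 2000, 1)) AS DATAPROP FROM table_name WHERE \$CONDITIONS" \
  --map-column-java CMID=Integer,DATAPROP=String \
  --as-parquetfile --delete-target-dir --target-dir /user/test/ --m 1
# DBMS_LOB.SUBSTR on a BLOB returns RAW and RAWTOHEX turns that into a hex string,
# so the JDBC driver hands Sqoop an ordinary string instead of a BLOB accessor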
Labels:
- Apache Sqoop
11-27-2019
04:29 AM
@EricL I tried --map-column-java Settings=String and got the data type as string in Hive. The data does not look corrupted, but the hexadecimal representation of the value is not retained.
Command used:
sqoop import --driver 'com.microsoft.sqlserver.jdbc.SQLServerDriver' --connect <> --connection-manager 'org.apache.sqoop.manager.SQLServerManager' --username <> -password <> --table 'my_table' --as-parquetfile --delete-target-dir --target-dir /user/test/ --map-column-java Settings=String --m 1
Result of the above query:
Column data type in source: image
Column value in source: 0xFFffDeDJBF.......dDf (a hexadecimal value)
Column data type in Hive: string
Column value in Hive: FFffDeDJBF.......dDf (the 0x prefix is not retained)
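If retaining the 0x prefix matters, one variation I am thinking of trying (an untested sketch; CONVERT with style 1 is standard T-SQL and emits the 0x prefix, and the image column has to be cast to varbinary(max) first):
sqoop import --driver 'com.microsoft.sqlserver.jdbc.SQLServerDriver' --connect <> \
  --connection-manager 'org.apache.sqoop.manager.SQLServerManager' --username <> -password <> \
  --query "select CONVERT(varchar(max), CAST(Settings AS varbinary(max)), 1) AS Settings from my_table where \$CONDITIONS" \
  --map-column-java Settings=String \
  --as-parquetfile --delete-target-dir --target-dir /user/test/ --m 1
# style 1 of CONVERT keeps the leading 0x; style 2 would produce the bare hex digits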
11-27-2019
04:20 AM
@EricL I tried the above. --map-column-hive is not making any impact: the column data type is taken as string in Hive after the import and the data is still corrupted. Using --hive-import and --hive-table with it helped in getting the data type as binary in Hive, but the data is still corrupted.
Command used:
sqoop import --driver 'com.microsoft.sqlserver.jdbc.SQLServerDriver' --connect <> --connection-manager 'org.apache.sqoop.manager.SQLServerManager' --username <> -password <> --hive-import --hive-table testc --as-parquetfile --delete-target-dir --target-dir '/user/test/' --query "select * FROM my_table where \$CONDITIONS; " --m 1 --map-column-hive Settings=binary
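For what it is worth, a quick sanity check to see whether the raw bytes at least survive the import (assuming Hive's built-in hex() function; testc and Settings are taken from the command above) would be:
hive -e "select hex(Settings) from testc limit 1"
# if the bytes are intact, the output should match the source value (e.g. 0x1F8B08...) minus the 0x prefix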
11-26-2019
06:03 AM
@VidyaSargur Is there any Sqoop expert who can throw some light on this issue?
11-25-2019
04:08 AM
@pmohan Does this help with columns having the image data type? Converting the image data type to binary is again leading to data corruption. Please see the issue highlighted here: https://community.cloudera.com/t5/Support-Questions/SQOOP-import-of-quot-image-quot-data-type-into-hive/td-p/283584
11-25-2019
12:30 AM
1 Kudo
Using the --query option helped. Solution:
sqoop import --connect <> --username <> -password <> --as-parquetfile --delete-target-dir --target-dir /hdfs/dir/ --query "select * from [DB_SCHEMA_NAME].[table_name] where \$CONDITIONS;" --m 1
11-21-2019
12:06 AM
I am trying to import a table from MSSQL that has a few columns with the data type "image".
The values in these columns are in binary format (LOBs).
Example:
Database type: MSSQL
Column name: Settings
Data type: image
Value of the column:
( 0x1F8B0800000400EDBD07601C499625262FD7E0CC188CDE692EC1D69472329AB2A81CA6501320CCE74A10880604010ED9D....)
When I import using Sqoop, the column is automatically taken as string in Hive, but the data is corrupted.
Command used:
sqoop import --driver 'com.microsoft.sqlserver.jdbc.SQLServerDriver' --connect 'jdbc:sqlserver://IP:PORT;database=DB;' --connection-manager 'org.apache.sqoop.manager.SQLServerManager' --username <> -password <> --as-parquetfile --delete-target-dir --target-dir '/user/test/' --query "select GUID,Name,cast(Settings AS binary) AS Settings FROM my_table_name where \$CONDITIONS; " --m 1
Labels:
- Apache Sqoop
11-10-2019
11:13 PM
@Khanna Command used:
sqoop import --driver "com.microsoft.sqlserver.jdbc.SQLServerDriver" --connect "jdbc:sqlserver://server:port;database=db_name;" --connection-manager "org.apache.sqoop.manager.SQLServerManager" --username <> -password <> --table 'table_name' --as-parquetfile --delete-target-dir --target-dir /user/test/axe/ --temporary-rootdir /user/test2/ --m 4 --split-by user_id
Before the MapReduce job starts:
hadoop fs -ls -R /user/test/
/user/test/axe
/user/test/axe/.metadata
/user/test/axe/.metadata/descriptor.properties
/user/test/axe/.metadata/schema.avsc
/user/test/axe/.metadata/schemas
/user/test/axe/.metadata/schemas/1.avsc
When the MapReduce job starts:
hadoop fs -ls -R /user/test/
/user/test/.temp
/user/test/.temp/job_1571067970221_0156
/user/test/.temp/job_1571067970221_0156/mr
/user/test/.temp/job_1571067970221_0156/mr/job_1571067970221_0156
/user/test/.temp/job_1571067970221_0156/mr/job_1571067970221_0156/.metadata
/user/test/.temp/job_1571067970221_0156/mr/job_1571067970221_0156/.metadata/descriptor.properties
/user/test/.temp/job_1571067970221_0156/mr/job_1571067970221_0156/.metadata/schema.avsc
/user/test/.temp/job_1571067970221_0156/mr/job_1571067970221_0156/.metadata/schemas
/user/test/.temp/job_1571067970221_0156/mr/job_1571067970221_0156/.metadata/schemas/1.avsc
/user/test/axe
/user/test/axe/.metadata
/user/test/axe/.metadata/descriptor.properties
/user/test/axe/.metadata/schema.avsc
/user/test/axe/.metadata/schemas
/user/test/axe/.metadata/schemas/1.avsc
Once the import is complete:
hadoop fs -ls -R /user/test/
/user/test/axe
/user/test/axe/.metadata
/user/test/axe/.metadata/descriptor.properties
/user/test/axe/.metadata/schema.avsc
/user/test/axe/.metadata/schemas
/user/test/axe/.metadata/schemas/1.avsc
/user/test/axe/.signals
/user/test/axe/.signals/unbounded
/user/test/axe/679dadfc-3657-410b-a805-6c98b8d1720b.parquet
/user/test/axe/923bbf06-35d6-4156-8df3-a6e53ebf00f7.parquet
/user/test/axe/aef24eac-15dd-4ebc-a61a-68265d53a320.parquet
/user/test/axe/f903a079-87b3-48a0-bea6-aa02d92c5aac.parquet
11-10-2019
10:53 PM
Sqoop import fails while importing tables that have spaces and special characters in the table name.
Command used:
sqoop import --connect <> --username <> --password <> --table "employee details information" --as-parquetfile --delete-target-dir --target-dir /hdfs/dir/ --m 1
Labels:
- Apache Sqoop
11-10-2019
10:51 PM
@Khanna I tried it, but the .temp directory is still getting created in the target directory that I specify.
11-05-2019
10:23 PM
@Shelton The --target-dir is specified so that the generated files are placed in that directory. The problem I am facing is with the temporary directory (.temp) that gets created at runtime (i.e., when the MapReduce job is initiated) inside the target-dir. The solution I am looking for is a way to change the location of that temporary (.temp) directory.
11-04-2019
10:42 PM
Hi Abhilash, I am facing the same issue while running concurrent sessions at the same time. Did you find a solution to change the temp directory?
11-04-2019
09:47 PM
Hi @Shelton, I am not using --hive-import or trying to create a Hive table. The issue is that a .temp directory gets created in the target-dir I specify. The .temp directory is deleted once execution completes, but my use case is to change the directory where it gets created at runtime.
11-04-2019
05:13 AM
A ".temp" directory is getting created in the target directory while importing data using sqoop. I want to change the directory location where the .temp/job_1569164528775_11264/mr/* directories are getting created.
command:
sqoop import -D yarn.app.mapreduce.am.staging-dir=/user/test/ --driver "com.microsoft.sqlserver.jdbc.SQLServerDriver" --connect "jdbc:sqlserver://ip:port;database=database;" --connection-manager "org.apache.sqoop.manager.SQLServerManager" --username <username> -password <password> --table 'tablename' --as-parquetfile --delete-target-dir --target-dir /user/database/test/ --m 1
result now:
hadoop fs -ls -R /user/database/test/
/user/database/test/.temp
/user/database/test/.temp/job_1569164528775_11264/mr/.....*
Expected result:
hadoop fs -ls -R /user/test/
/user/test/.temp/job_1569164528775_11264/mr/.....*
I tried the properties below as well:
- mapred.system.dir
- mapreduce.task.output.dir
- mapreduce.task.tmp.dir
- mapred.work.output.dir
- mapreduce.cluster.temp.dir
- mapreduce.cluster.local.dir
- mapred.temp.dir
- mapreduce.jobtracker.system.dir
- hadoop.tmp.dir
Labels:
- Apache Sqoop