Error when I try to import a table from Postgres to Hive
- Labels: Apache Hive, Apache Sqoop
Created on ‎11-06-2017 04:07 PM - edited ‎08-18-2019 01:48 AM
When I try to execute a shell job with Sqoop, or a Sqoop job, in Workflow Manager I get an error. I am trying to import tables from Postgres to Hive:
2017-11-06 13:58:46,622 WARN ShellActionExecutor:523 - SERVER[ambari.4linux.com.br] USER[admin] GROUP[-] TOKEN[] APP[Fluxo_BigDatinha] JOB[0000003-171106133302301-oozie-oozi-W] ACTION[0000003-171106133302301-oozie-oozi-W@Verifica_Sujeira_HDFS] Launcher ERROR, reason: Main class [org.apache.oozie.action.hadoop.ShellMain], exit code [1]
My Shell Script:
sqoop import --connect jdbc:postgresql://10.0.0.78:5433/cr4prd --username ambari --password xxxxxxx --table tb_con_contatos --target-dir /user/admin/teste2
My Cluster Configuration:
I am running in /user/admin/sqoop, and inside this directory there is a folder with the lib:
Created ‎11-06-2017 04:17 PM
There is not much logging in the provided WARN message to identify the root cause of the issue.
But there are a few things we can check: is the Postgres JDBC driver jar installed in the Oozie sharelib? And is the Sqoop client installed properly on all the nodes?
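A rough sketch of those two checks (the sharelib path layout and the driver jar location below are assumptions based on a typical HDP install; substitute your cluster's actual values):

```shell
# List the sqoop sharelib contents as Oozie sees them
# (assumption: Oozie server runs on the Ambari host, default port 11000)
oozie admin -oozie http://ambari.4linux.com.br:11000/oozie -shareliblist sqoop

# Look for a postgresql jar in the sqoop sharelib on HDFS
hdfs dfs -ls /user/oozie/share/lib/lib_<timestamp>/sqoop | grep -i postgresql

# If it is missing, copy the driver in and tell Oozie to refresh its sharelib
# (use the actual lib_<timestamp> directory shown by the listing above)
hdfs dfs -put /usr/share/java/postgresql-jdbc.jar /user/oozie/share/lib/lib_<timestamp>/sqoop/
oozie admin -oozie http://ambari.4linux.com.br:11000/oozie -sharelibupdate

# On each node, verify the Sqoop client is installed and on the PATH
which sqoop && sqoop version
```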
Created ‎11-06-2017 06:15 PM
My "job.properties" didn't include the lib. I added the lib, and the job still failed.
😞
Created ‎11-06-2017 06:22 PM
I created a Sqoop job in Workflow Manager and it works. Now I try to execute this command and the job fails. Can you help me?
sqoop import --connect jdbc:postgresql://10.0.0.78:5433/cr4prd --username ambari --password xxxxxx --table tb_con_contatos --hive-database cr4prd --hive-import -m1
Created ‎11-06-2017 06:25 PM
sqoop import --connect jdbc:postgresql://10.0.0.78:5433/cr4prd --username ambari --password xxxxxx --table tb_con_contatos --hive-database cr4prd --hive-import -m1
If your standalone Sqoop command above is failing with a missing-driver error, then you will need to place your PostgreSQL JDBC driver inside "/usr/hdp/current/sqoop-client/lib/".
Created ‎11-06-2017 06:38 PM
I just tried this at my end, and it seems to have worked with the following command set.
# cp /usr/lib/ambari-server/postgresql-9.3-1101-jdbc4.jar /usr/hdp/current/sqoop-client/lib/
# su -l sqoop -c "sqoop import --connect jdbc:postgresql://amb25101.example.com:5432/ambari --username ambari --password bigdata --table hosts --hive-database default --hive-table ambari_hosts_hive --hive-import -m 1 --target-dir /apps/hive/warehouse/ambari_hosts_hive.db/ambari_hosts_hive"
Is your PostgreSQL jar correct? I mean, is it corrupted by any chance?
# grep 'org.postgresql.Driver' /usr/lib/ambari-server/postgresql-9.3-1101-jdbc4.jar
Binary file /usr/lib/ambari-server/postgresql-9.3-1101-jdbc4.jar matches
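Another quick way to rule out corruption (a sketch; the jar path matches the one used above, adjust if yours differs):

```shell
# Run the zip integrity test; a corrupted jar will report errors here
unzip -t /usr/lib/ambari-server/postgresql-9.3-1101-jdbc4.jar > /dev/null && echo "jar OK"

# Confirm the driver class file is actually present in the archive
unzip -l /usr/lib/ambari-server/postgresql-9.3-1101-jdbc4.jar | grep 'org/postgresql/Driver.class'
```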
Created on ‎11-06-2017 06:49 PM - edited ‎08-18-2019 01:48 AM
All commands work fine, but when I try to execute a Sqoop job in Workflow Manager I get an error.
My Job Configuration:
Created ‎11-06-2017 08:15 PM
I found this error in the log:
2017-11-06 18:09:51,792 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Calling handler for JobFinishedEvent
2017-11-06 18:09:51,792 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1509998820430_0003Job Transitioned from COMMITTING to SUCCEEDED
2017-11-06 18:09:51,796 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so this is the last retry
2017-11-06 18:09:51,796 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator isAMLastRetry: true
2017-11-06 18:09:51,796 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: RMCommunicator notified that shouldUnregistered is: true
2017-11-06 18:09:51,796 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry: true
2017-11-06 18:09:51,796 INFO [Thread-68] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: JobHistoryEventHandler notified that forceJobCompletion is true
2017-11-06 18:09:51,796 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the services
2017-11-06 18:09:51,810 INFO [Thread-68] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping JobHistoryEventHandler. Size of the outstanding queue size is 0
2017-11-06 18:09:52,093 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://ambari.4linux.com.br:8020/user/admin/.staging/job_1509998820430_0003/job_1509998820430_0003_1.jhist to hdfs://ambari.4linux.com.br:8020/mr-history/tmp/admin/job_1509998820430_0003-1509998963150-admin-oozie%3Alauncher%3AT%3Dsqoop%3AW%3DFluxo_BigDatinh-1509998991790-1-0-SUCCEEDED-default-1509998978047.jhist_tmp
2017-11-06 18:09:52,227 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0
2017-11-06 18:09:52,235 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_e29_1509998820430_0003_01_000002
2017-11-06 18:09:52,235 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0
2017-11-06 18:09:52,237 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1509998820430_0003_m_000000_0: Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143
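A note on reading this log: exit code 143 here is the ApplicationMaster killing a container that had already finished, and the launcher job itself transitioned to SUCCEEDED, so the real Sqoop failure is usually in the Oozie action's own stdout/stderr rather than in this MapReduce log. A sketch of how to pull those logs (the Oozie server URL is an assumption based on the hostname in the messages above; the workflow and job IDs are the ones from this thread):

```shell
# Show the action-level status and external (launcher) job IDs for the workflow
oozie job -oozie http://ambari.4linux.com.br:11000/oozie -info 0000003-171106133302301-oozie-oozi-W

# Fetch the full container logs of the launcher application, where Sqoop prints its error
yarn logs -applicationId application_1509998820430_0003 | less
```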
Created ‎11-06-2017 06:29 PM
In my path "/usr/hdp/current/sqoop-client/lib/" I didn't have the Postgres driver. I put it inside the path and ran the job, but it failed the same way.
Created ‎11-06-2017 11:39 PM
Thanks for your help.
I solved this problem as follows:
- Create a Sqoop job to extract the data and load it into HDFS
- Use a Hive job to execute a "load data inpath" to load the data into Hive
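The two steps above can be sketched as a pair of commands (the connection string, table, and target directory reuse the values from earlier in the thread; the Hive table cr4prd.tb_con_contatos is an assumption and must already exist with a matching schema):

```shell
# Step 1: Sqoop extracts the table from Postgres into an HDFS staging directory
sqoop import --connect jdbc:postgresql://10.0.0.78:5433/cr4prd \
  --username ambari --password xxxxxx \
  --table tb_con_contatos --target-dir /user/admin/teste2 -m 1

# Step 2: a Hive job moves the staged files into the table's warehouse location
hive -e "LOAD DATA INPATH '/user/admin/teste2' INTO TABLE cr4prd.tb_con_contatos;"
```

Note that LOAD DATA INPATH moves (not copies) the files out of the staging directory, so the Sqoop step must recreate it on each run.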
