Support Questions

Error when I try to import a table from Postgres to Hive


When I try to execute a shell job with Sqoop (or a Sqoop job) in Workflow Manager, I get an error.

I am trying to import tables from Postgres to Hive:

2017-11-06 13:58:46,622  WARN ShellActionExecutor:523 - SERVER[ambari.4linux.com.br] USER[admin] GROUP[-] TOKEN[] APP[Fluxo_BigDatinha] JOB[0000003-171106133302301-oozie-oozi-W] ACTION[0000003-171106133302301-oozie-oozi-W@Verifica_Sujeira_HDFS] Launcher ERROR, reason: Main class [org.apache.oozie.action.hadoop.ShellMain], exit code [1]

My Shell Script:

sqoop import --connect jdbc:postgresql://10.0.0.78:5433/cr4prd --username ambari --password xxxxxxx --table tb_con_contatos --target-dir /user/admin/teste2

My Cluster Configuration:

42475-screenshot-2017-11-6-ambari-bigdatinha.png

I am running it in /user/admin/sqoop, and inside this directory there is a folder with the lib:

42476-screenshot-2017-11-6-ambari-bigdatinha1.png

1 ACCEPTED SOLUTION

Master Mentor

@Guilherme Colla

There is not much logging in the provided WARN message to identify the root cause of the issue.

But there are a few things we can check: is the Postgres JDBC driver JAR installed in the Oozie sharelib or not? And is the Sqoop client installed properly on all the nodes?
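A quick way to run those checks from a cluster node might look like the sketch below. The sharelib path, the `lib_<timestamp>` directory, and the driver JAR location are assumptions — they vary by installation:

```shell
# Check whether a PostgreSQL driver JAR is already in the Oozie sharelib for Sqoop
# (the lib_* timestamp directory name varies per sharelib install)
hdfs dfs -ls /user/oozie/share/lib/lib_*/sqoop/ | grep -i postgres

# If it is missing, upload the driver JAR (path is a placeholder) and
# tell Oozie to refresh its sharelib without a restart
hdfs dfs -put /usr/share/java/postgresql-jdbc.jar /user/oozie/share/lib/lib_*/sqoop/
oozie admin -oozie http://ambari.4linux.com.br:11000/oozie -sharelibupdate

# Verify the Sqoop client exists on this node
which sqoop && sqoop version
```

If the workflow bundles the driver in its own `lib/` directory instead, the sharelib step is not needed, but `oozie.use.system.libpath` still controls whether sharelib JARs are picked up.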


17 REPLIES


@Jay Kumar SenSharma

My "job.properties" didn't include the lib; I put the lib in and the job still failed.

😞
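For reference, a minimal `job.properties` for this kind of workflow might look like the sketch below. The NameNode host and HDFS app path are taken from this thread; the JobTracker/ResourceManager port (8050 is the HDP default) is an assumption:

```shell
# Sketch of a job.properties for the Oozie workflow in this thread.
# Host names and ports are assumptions based on the posted logs.
cat > job.properties <<'EOF'
nameNode=hdfs://ambari.4linux.com.br:8020
jobTracker=ambari.4linux.com.br:8050
queueName=default
# Makes Oozie add the system sharelib (including any Sqoop/JDBC jars there)
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/admin/sqoop
EOF

# Confirm the sharelib flag made it into the file
grep 'oozie.use.system.libpath' job.properties
```

With `oozie.use.system.libpath=true`, JARs placed in the Oozie sharelib are added to the launcher's classpath in addition to anything in the workflow's own `lib/` folder.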


@Jay Kumar SenSharma

I created a Sqoop job in Workflow Manager and it works. Now I try to execute this command and the job fails. Can you help me?

sqoop import --connect jdbc:postgresql://10.0.0.78:5433/cr4prd --username ambari --password xxxxxx --table tb_con_contatos --hive-database cr4prd --hive-import -m1

Master Mentor

@Guilherme Colla

sqoop import --connect jdbc:postgresql://10.0.0.78:5433/cr4prd --username ambari --password xxxxxx --table tb_con_contatos --hive-database cr4prd --hive-import -m1

If your standalone Sqoop command above is failing with a missing-driver error, then you will need to place your PostgreSQL JDBC driver inside "/usr/hdp/current/sqoop-client/lib/".

.

Master Mentor

@Guilherme Colla

I just tried this at my end and it seems to have worked with the following command set.

# cp /usr/lib/ambari-server/postgresql-9.3-1101-jdbc4.jar  /usr/hdp/current/sqoop-client/lib/

.

# su -l sqoop -c "sqoop  import --connect jdbc:postgresql://amb25101.example.com:5432/ambari --username ambari --password bigdata --table hosts  --hive-database default --hive-table ambari_hosts_hive --hive-import -m 1 --target-dir /apps/hive/warehouse/ambari_hosts_hive.db/ambari_hosts_hive"

.

Is your PostgreSQL JAR correct? I mean, is it corrupted by any chance?

# grep 'org.postgresql.Driver' /usr/lib/ambari-server/postgresql-9.3-1101-jdbc4.jar

Binary file /usr/lib/ambari-server/postgresql-9.3-1101-jdbc4.jar matches

.


@Jay Kumar SenSharma

All commands work fine, but when I try to execute a Sqoop job in Workflow Manager I get an error.

My Job Configuration:

42479-screenshot-2017-11-6-ambari-bigdatinha3.png

42480-screenshot-2017-11-6-ambari-bigdatinha4.png

42481-screenshot-2017-11-6-ambari-bigdatinha5.png


@Jay Kumar SenSharma

I found this error in the log:


2017-11-06 18:09:51,792 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Calling handler for JobFinishedEvent
2017-11-06 18:09:51,792 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1509998820430_0003Job Transitioned from COMMITTING to SUCCEEDED
2017-11-06 18:09:51,796 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so this is the last retry
2017-11-06 18:09:51,796 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator isAMLastRetry: true
2017-11-06 18:09:51,796 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: RMCommunicator notified that shouldUnregistered is: true
2017-11-06 18:09:51,796 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry: true
2017-11-06 18:09:51,796 INFO [Thread-68] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: JobHistoryEventHandler notified that forceJobCompletion is true
2017-11-06 18:09:51,796 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the services
2017-11-06 18:09:51,810 INFO [Thread-68] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping JobHistoryEventHandler. Size of the outstanding queue size is 0
2017-11-06 18:09:52,093 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://ambari.4linux.com.br:8020/user/admin/.staging/job_1509998820430_0003/job_1509998820430_0003_1.jhist to hdfs://ambari.4linux.com.br:8020/mr-history/tmp/admin/job_1509998820430_0003-1509998963150-admin-oozie%3Alauncher%3AT%3Dsqoop%3AW%3DFluxo_BigDatinh-1509998991790-1-0-SUCCEEDED-default-1509998978047.jhist_tmp
2017-11-06 18:09:52,227 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0
2017-11-06 18:09:52,235 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_e29_1509998820430_0003_01_000002
2017-11-06 18:09:52,235 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0
2017-11-06 18:09:52,237 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1509998820430_0003_m_000000_0: Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143
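Note that this log shows the launcher job transitioning to SUCCEEDED, and exit code 143 is SIGTERM (128 + 15) from the ApplicationMaster killing an already-finished container during cleanup, so it is usually not the real failure. The actual Sqoop error tends to be in the launcher's stdout/stderr. A way to pull the full aggregated logs (the application ID below is derived from the job ID in this log; YARN log aggregation must be enabled):

```shell
# Fetch the aggregated container logs for the Oozie launcher
# (application id corresponds to job_1509998820430_0003 above)
yarn logs -applicationId application_1509998820430_0003 > launcher.log

# Look for the real Sqoop/Hive error in the launcher output
grep -iE 'ERROR|Exception' launcher.log | head -20
```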

avatar

@Jay Kumar SenSharma

In my path "/usr/hdp/current/sqoop-client/lib/" I didn't have the Postgres driver. I put it inside the path and ran the job, but it failed the same way.

@Jay Kumar SenSharma

Thanks for your help.

I solved this problem as follows:

  • Create a Sqoop job to extract the data and load it into HDFS
  • Use a Hive job to execute a "load data inpath" to load the data into Hive
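The two steps above could be sketched as follows, using the connection details and target directory from earlier in this thread. The Hive table name and the assumption that it already exists with a schema matching the Sqoop output are mine, not from the original post:

```shell
# Step 1: extract the table from Postgres into HDFS
# (host, database, and credentials are from the thread; password redacted)
sqoop import \
  --connect jdbc:postgresql://10.0.0.78:5433/cr4prd \
  --username ambari --password xxxxxx \
  --table tb_con_contatos \
  --target-dir /user/admin/teste2 \
  -m 1

# Step 2: move the extracted files into an existing Hive table
# (assumes cr4prd.tb_con_contatos was created in Hive with a matching schema
#  and a field delimiter matching Sqoop's output)
hive -e "LOAD DATA INPATH '/user/admin/teste2' INTO TABLE cr4prd.tb_con_contatos;"
```

Splitting the pipeline this way avoids the `--hive-import` step inside the Oozie launcher, which is where the classpath problems in this thread kept appearing.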