
Sqoop job is being KILLED when executed through Hue

Re: Sqoop job is being KILLED when executed through Hue

Guru
Hmmm, I suspect it is the space between "Reference" and "ID": Oozie breaks the command up into separate parameters and ignores the quotes you have.

Can you try another field that does not have a space in it?

Cheers
Eric

Re: Sqoop job is being KILLED when executed through Hue

Explorer

OK... here is another try, on a different table:

<sqoop xmlns="uri:oozie:sqoop-action:0.2">
  <job-tracker>masternode:8032</job-tracker>
  <name-node>hdfs://NameServiceOne</name-node>
  <command>import \
--connect 'jdbc:sqlserver://11.11.11.11;database=SQL_Training' \
--username SQL_Training_user --password SQL_Training_user \
--table BigDataTest -m 1  --check-column lastmodified \
--merge-key id \
--incremental lastmodified \
--compression-codec=snappy \
--target-dir /user/hive/warehouse/dwh_db_atlas_jrtf.db/BigDataTest \
--hive-table BigDataTest \
--map-column-hive lastmodified=timestamp \
--fields-terminated-by '\001'  --fields-terminated-by '\n'</command>
  <configuration />
</sqoop>
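One thing worth noting about the action above: Oozie does not pass the `<command>` string through a shell. It simply splits the string on whitespace, so the trailing `\` line continuations reach Sqoop as literal stand-alone arguments, and the quotes around `'\001'` are not honored. A hedged sketch of the same action rewritten with one `<arg>` element per token (values copied from the command above, abbreviated) avoids both problems:

```xml
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
  <job-tracker>masternode:8032</job-tracker>
  <name-node>hdfs://NameServiceOne</name-node>
  <arg>import</arg>
  <arg>--connect</arg>
  <arg>jdbc:sqlserver://11.11.11.11;database=SQL_Training</arg>
  <arg>--username</arg>
  <arg>SQL_Training_user</arg>
  <arg>--password</arg>
  <arg>SQL_Training_user</arg>
  <arg>--table</arg>
  <arg>BigDataTest</arg>
  <arg>-m</arg>
  <arg>1</arg>
  <arg>--check-column</arg>
  <arg>lastmodified</arg>
  <arg>--incremental</arg>
  <arg>lastmodified</arg>
  <arg>--merge-key</arg>
  <arg>id</arg>
  <arg>--fields-terminated-by</arg>
  <arg>\001</arg>
  <!-- remaining options from the original command follow the same pattern -->
</sqoop>
```

With `<arg>` elements, each value is delivered to Sqoop exactly as written, spaces included, so no shell-style quoting or line continuation is needed.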

But I get the same error!

Re: Sqoop job is being KILLED when executed through Hue

Guru
@anis447

Can you please share the content of stderr.log for the launcher? I have tested in my 6.2 cluster, and my job failed with the following error in stderr.log:

java.lang.RuntimeException: Could not load db driver class: com.mysql.jdbc.Driver

And I see the same message as yours in stdout.log:

Fetching child yarn jobs
tag id : oozie-b1d7e6d15cf45e6c89a06ad6c89e7109
No child applications found

No child hadoop job is executed.
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.oozie.action.hadoop.LauncherAM.runActionMain(LauncherAM.java:410)
at org.apache.oozie.action.hadoop.LauncherAM.access$300(LauncherAM.java:55)
at org.apache.oozie.action.hadoop.LauncherAM$2.run(LauncherAM.java:223)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.oozie.action.hadoop.LauncherAM.run(LauncherAM.java:217)
at org.apache.oozie.action.hadoop.LauncherAM$1.run(LauncherAM.java:153)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.oozie.action.hadoop.LauncherAM.main(LauncherAM.java:141)
Caused by: java.lang.SecurityException: Intercepted System.exit(1)
at org.apache.oozie.action.hadoop.security.LauncherSecurityManager.checkExit(LauncherSecurityManager.java:57)
at java.lang.Runtime.exit(Runtime.java:107)
at java.lang.System.exit(System.java:971)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
at org.apache.oozie.action.hadoop.SqoopMain.runSqoopJob(SqoopMain.java:214)
at org.apache.oozie.action.hadoop.SqoopMain.run(SqoopMain.java:199)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:104)
at org.apache.oozie.action.hadoop.SqoopMain.main(SqoopMain.java:51)
... 16 more

It looks like your Sqoop job failed before it even reached the step that creates the Sqoop YARN jobs.
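The `Could not load db driver class` error above means the JDBC driver jar is not on the launcher's classpath. A common fix is to ship the driver with the workflow in its `lib/` directory on HDFS (the paths and jar name below are illustrative examples, not from the thread):

```shell
# Example paths: place the JDBC driver jar in the workflow's lib/
# directory so the Oozie launcher adds it to the classpath automatically.
hdfs dfs -mkdir -p /user/admin/workflows/sqoop-import/lib
hdfs dfs -put mysql-connector-java-5.1.47.jar /user/admin/workflows/sqoop-import/lib/

# Alternatively, add the jar to the Oozie Sqoop sharelib and refresh it:
# oozie admin -sharelibupdate
```

For the SQL Server import in the original command, the equivalent jar would be the Microsoft JDBC driver rather than the MySQL connector.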

Cheers
Eric

Re: Sqoop job is being KILLED when executed through Hue

Guru
I am also getting the same error after fixing the JDBC driver issue. In the Oozie server log, I noticed the following:

2019-06-20 22:51:19,640 WARN org.apache.oozie.action.hadoop.HadoopTokenHelper: SERVER[xxxx-xxx.cloudera.com] USER[admin] GROUP[-] TOKEN[] APP[My Workflow] JOB[0000009-190620171825845-oozie-oozi-W] ACTION[0000009-190620171825845-oozie-oozi-W@sqoop-6891] An error happened while trying to get server principal. Getting it from service principal anyway.
java.lang.IllegalArgumentException: Does not contain a valid host:port authority: yarnRM

Can you check if you get the same error?

I am still researching...

Cheers
Eric

Re: Sqoop job is being KILLED when executed through Hue

Explorer

I am also searching for this on the internet. I've found a couple of people who have the same problem and tried to follow their solutions, but unfortunately they didn't work... one of them is:


http://morecoder.com/article/1097655.html

I have tried many things and changed a couple of configurations, so I am not sure whether we are still on the same page... anyway, this is the stderr I am getting now:

Log Type: stderr
Log Upload Time: Sat Jun 22 11:57:38 +0400 2019
Log Length: 937

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/yarn/nm/filecache/159/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/jars/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/yarn/nm/filecache/23/3.0.0-cdh6.2.0-mr-framework.tar.gz/slf4j-log4j12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show Log4j2 internal initialization logging.

Try --help for usage instructions.
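`Try --help for usage instructions.` means Sqoop rejected the argument list itself before doing any work. Two things worth checking in the command posted earlier: the stray `\` line-continuation tokens that Oozie passes through verbatim, and the duplicated `--fields-terminated-by` flag (the second occurrence was presumably meant to be `--lines-terminated-by`). A small sketch for spotting repeated long options in an argument string (the command here is an abbreviated stand-in for the one in the thread):

```shell
# Abbreviated stand-in for the sqoop argument string from the thread.
cmd="import --connect jdbc:sqlserver://11.11.11.11;database=SQL_Training \
--table BigDataTest --fields-terminated-by 0x01 --fields-terminated-by 0x0a"

# Split on whitespace (as Oozie does), keep only the long options,
# and print any option that occurs more than once.
dups=$(printf '%s\n' $cmd | grep -e '^--' | sort | uniq -d)
echo "duplicated options: $dups"
```

Running this prints `duplicated options: --fields-terminated-by`, confirming the repeated flag.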

Re: Sqoop job is being KILLED when executed through Hue

Expert Contributor

Any solution for this issue? Why does Hue in CDH 6.2.0 have so many little bugs?


Sqoop actions cannot execute correctly, whether from the editor or from Oozie.


I can't even submit a Sqoop job in Oozie. Please see the details below:


[06/Jul/2019 19:51:44 +0800] resource     DEBUG    GET /admin/configuration Got response in 16ms: {"oozie.email.smtp.auth":"false","oozie.service.ELService.functions.coord-job-submit-data":"\n            coord:dataIn=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataIn_echo,\n            coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataOut_echo,\n            coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_wrap,\n            coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actualTime_echo_wrap,\n            coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo,\n            coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo,\n            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo,\n            coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_epochTime_echo,\n            coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actionId_echo,\n            coord:name=org.apache.oozie.coord.CoordELFunctions#ph1_coord_name_echo,\n            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n            coord:databaseIn=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseIn_echo,\n            coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseOut_echo,\n            coord:tableIn=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableIn_echo,\n            coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableOut_echo,\n            coord:dataInPartitionFilter=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionFilter_echo,\n            coord:dataInPartitionMin=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionMin_echo,\n            
coord:dataInPartitionMax=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionMax_echo,\n            coord:dataInPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coor...
[06/Jul/2019 19:51:44 +0800] exceptions_renderable ERROR    Potential detail: 'statement'
[06/Jul/2019 19:51:44 +0800] exceptions_renderable ERROR    Potential trace: [('/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hue/apps/oozie/src/oozie/views/editor2.py', 413, '_submit_workflow_helper', 'job_id = _submit_workflow(request.user, request.fs, request.jt, workflow, mapping)'), ('/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hue/apps/oozie/src/oozie/views/editor2.py', 453, '_submit_workflow', 'job_id = submission.run()'), ('/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hue/desktop/libs/liboozie/src/liboozie/submission2.py', 58, 'decorate', 'deployment_dir = self.deploy()'), ('/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hue/desktop/libs/liboozie/src/liboozie/submission2.py', 370, 'deploy', "action.data['type'] in ('sqoop', 'sqoop-document') and action.data['properties']['statement'] in '--hive-import'):")]
[06/Jul/2019 19:51:44 +0800] middleware   INFO     Processing exception: Workflow submission failed: Traceback (most recent call last):
  File "/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11-py2.7.egg/django/core/handlers/base.py", line 185, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hue/build/env/lib/python2.7/site-packages/Django-1.11-py2.7.egg/django/utils/decorators.py", line 185, in inner
    return func(*args, **kwargs)
  File "/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hue/apps/oozie/src/oozie/decorators.py", line 115, in decorate
    return view_func(request, *args, **kwargs)
  File "/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hue/apps/oozie/src/oozie/decorators.py", line 77, in decorate
    return view_func(request, *args, **kwargs)
  File "/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hue/apps/oozie/src/oozie/views/editor2.py", line 369, in submit_workflow
    return _submit_workflow_helper(request, workflow, submit_action=reverse('oozie:editor_submit_workflow', kwargs={'doc_id': workflow.id}))
  File "/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hue/apps/oozie/src/oozie/views/editor2.py", line 415, in _submit_workflow_helper
    raise PopupException(_('Workflow submission failed'), detail=smart_str(e), error_code=200)

(attached screenshot: 1.png)


If I use Sqoop in the Hue editor:


	... 83 more
22:23:44.775 [4430f9a6-d62d-47db-add0-b8c79715be8f main] WARN  hive.metastore - Failed to connect to the MetaStore Server...
22:23:44.776 [4430f9a6-d62d-47db-add0-b8c79715be8f main] INFO  hive.metastore - Waiting 1 seconds before next connection attempt.
22:23:45.776 [4430f9a6-d62d-47db-add0-b8c79715be8f main] INFO  hive.metastore - Trying to connect to metastore with URI thrift://oyoshbddnprd2.ahotels.tech:9083
22:23:45.779 [4430f9a6-d62d-47db-add0-b8c79715be8f main] ERROR org.apache.thrift.transport.TSaslTransport - SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed
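The `GSS initiate failed` SASL error when connecting to the metastore usually points to a missing or expired Kerberos ticket in the environment running the command. A quick check from the submitting host (the principal below is an example, not from the thread):

```shell
# klist -s exits non-zero when no valid ticket is present;
# kinit then obtains one for the (example) ETL principal.
klist -s || kinit etl_user@EXAMPLE.COM
klist
```

For Oozie actions on a secured cluster, the workflow's `<credentials>` section (an `hcat` credential for the metastore) is the usual mechanism rather than an interactive `kinit`.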

Could you give me some advice?


It really has a big impact, because many BI and ETL users cannot use this function.