
Pig script in Hue - START_RETRY status


Explorer

Dear Friends

 

Recently, after adding the Flume service in CM, our one-line Pig test script (below) stopped working in the Pig Script editor in Hue, and we started getting Status: START_RETRY.

A = load '/andy/ZipCodes.csv';

Would you please help? Please let me know if you need more info.  Thanks in advance. 

 

It seems the job is not even submitted to Oozie. The only solution is to restart the Pig and Oozie services in CM, but that only works for a day; the next day I have to do the same thing again.

 

P.S. I removed the Flume service and the problem was solved, meaning the next day the script started working again, but we need to use Flume, so this workaround does not help much. Would you please help? Thanks in advance.

 

There is no error in the script itself, as it works after the two services are restarted.

We recently upgraded CM and CDH to 5.3, but the issue remains.

Here is the info I got from the Hue Server Logs (please see the last line). 

 

base ERROR Internal Server Error: /pig/dashboard/
Traceback (most recent call last):
File "/opt/cloudera/parcels/CDH-5.2.0-1.cdh5.2.0.p0.36/lib/hue/build/env/lib/python2.6/site-packages/Django-1.4.5-py2.6.egg/django/core/handlers/base.py", line 111, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File "/opt/cloudera/parcels/CDH-5.2.0-1.cdh5.2.0.p0.36/lib/hue/apps/oozie/src/oozie/views/dashboard.py", line 110, in decorate
return view_func(request, *args, **kwargs)
File "/opt/cloudera/parcels/CDH-5.2.0-1.cdh5.2.0.p0.36/lib/hue/apps/pig/src/pig/views.py", line 57, in dashboard
jobs = pig_api.get_jobs()
File "/opt/cloudera/parcels/CDH-5.2.0-1.cdh5.2.0.p0.36/lib/hue/apps/pig/src/pig/api.py", line 143, in get_jobs
return get_oozie(self.user).get_workflows(**kwargs).jobs

TypeError: get_workflows() got an unexpected keyword argument 'user'

 

 


Re: Pig script in Hue - START_RETRY status

You are hitting this bug, which is fixed in 5.3.

You did not fully upgrade to 5.3; we can see from the parcel path that it is still 5.2:
CDH-5.2.0-1.cdh5.2.0.p0.36

We recommend upgrading to 5.3 (or manually applying the change from the link above to Hue); upgrading to 5.3 will simply fix it.
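
For anyone curious what that TypeError means in practice, here is a minimal sketch of the same kind of keyword-argument mismatch (hypothetical names, not Hue's actual code): the Pig app keeps passing a keyword that the Oozie client it calls no longer accepts, and the fix is to bring the caller in line with the client's signature.

# Minimal sketch of a keyword-argument mismatch like the one in the
# traceback above (hypothetical names, not Hue's actual code).

def get_workflows(filters=None, cnt=100):
    # Hypothetical client call that only understands 'filters' and 'cnt'.
    return {'jobs': [], 'filters': filters or [], 'cnt': cnt}

def get_jobs_broken(username):
    # The caller still passes user=..., which the callee does not accept:
    # TypeError: get_workflows() got an unexpected keyword argument 'user'
    return get_workflows(user=username)

def get_jobs_fixed(username):
    # The caller is updated to match the client's signature, for example
    # by expressing the user as a filter instead of a keyword argument.
    return get_workflows(filters=[('user', username)])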

Romain


Re: Pig script in Hue - START_RETRY status

Explorer

Thanks Romain

 

We upgraded both CDH and CM to 5.3 last week. I will provide the new error messages, but I think they will be the same.

P.S. I can run the same Pig command in the Grunt shell successfully, so I guess the problem is with the Pig editor in Hue.

 

Just FYI, we are using Kerberos/LDAP and AD users to log in to Hue.

 

I much appreciate your support; please let me know if you have any questions.

 

Kind regards

Andy

Re: Pig script in Hue - START_RETRY status

Yes, please share the logs. I just tested on C5.3 with Kerberos and it worked for me.

Romain


Re: Pig script in Hue - START_RETRY status

Explorer

Can you tell me where I can get the recent log? Much appreciate it.

Here is the short version I got from the server log in Hue.

 

[29/Dec/2014 16:56:53 -0800] kerberos_ DEBUG handle_response(): returning <Response [200]>
[29/Dec/2014 16:56:53 -0800] kerberos_ ERROR handle_other(): Mutual authentication unavailable on 200 response
[29/Dec/2014 16:56:53 -0800] kerberos_ DEBUG handle_other(): Handling: 200


[29/Dec/2014 16:56:53 -0800] connectionpool DEBUG "GET /oozie/v1/job/0000002-141228125521164-oozie-oozi-W?timezone=America%2FLos_Angeles&doAs=<my_username> HTTP/1.1" 200 3293
[29/Dec/2014 16:56:53 -0800] resource DEBUG GET Got response: {"apps":null}
[29/Dec/2014 16:56:53 -0800] kerberos_ DEBUG handle_response(): returning <Response [200]>
[29/Dec/2014 16:56:53 -0800] kerberos_ ERROR handle_other(): Mutual authentication unavailable on 200 response
[29/Dec/2014 16:56:53 -0800] kerberos_ DEBUG handle_other(): Handling: 200
[29/Dec/2014 16:56:53 -0800] connectionpool DEBUG "GET /ws/v1/cluster/apps?user=<my_user_name>&finalStatus=UNDEFINED HTTP/1.1" 200 None
[29/Dec/2014 16:56:53 -0800] connectionpool DEBUG Setting read timeout to None
[29/Dec/2014 16:56:53 -0800] connectionpool DEBUG Setting read timeout to None
[29/Dec/2014 16:56:53 -0800] access INFO <server_ip and my_username> - "GET /jobbrowser/ HTTP/1.1"
[29/Dec/2014 16:56:53 -0800] resource DEBUG GET Got response: {"total":153,"workflows":[{"appP...
[29/Dec/2014 16:56:53 -0800] kerberos_ DEBUG handle_response(): returning <Response [200]>
[29/Dec/2014 16:56:53 -0800] kerberos_ ERROR handle_other(): Mutual authentication unavailable on 200 response
[29/Dec/2014 16:56:53 -0800] kerberos_ DEBUG handle_other(): Handling: 200
[29/Dec/2014 16:56:53 -0800] connectionpool DEBUG "GET /oozie/v1/jobs?filter=user%3D<mysusername>%3Bname%3Dpig-app-hue-script&timezone=America%2FLos_Angeles&jobtype=wf&len=100&doAs=<myusername> HTTP/1.1" 200 None
[29/Dec/2014 16:56:53 -0800] connectionpool DEBUG Setting read timeout to None
[29/Dec/2014 16:56:53 -0800] connectionpool INFO Resetting dropped connection: <our_server_name>
[29/Dec/2014 16:56:53 -0800] access INFO <server_ip and my username> - "GET /pig/dashboard/ HTTP/1.1"
[29/Dec/2014 16:56:53 -0800] api WARNING Autocomplete data fetching error default.None: Bad status for request TGetTablesReq(schemaName=u'default', sessionHandle=TSessionHandle(sessionId=THandleIdentifier(secret='\xfd\xe7=\xeb
\xa9\xbfJ\xc2\x91\x8b\xee\x07.j\xd3\xc5', guid='\xadu\x81e\xb0\x9aI\xaa\xb8\xb5-\x86\xcd\x03\xe7\x8c')), tableName='.*', tableTypes=None, catalogName=None):
TGetTablesResp(status=TStatus(errorCode=0, errorMessage='java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient', sqlState=None, infoMessages=None,
statusCode=3), operationHandle=None)
[29/Dec/2014 16:56:53 -0800] thrift_util DEBUG Thrift call <class 'TCLIService.TCLIService.Client'>.GetTables returned in 3018ms: TGetTablesResp(status=TStatus(errorCode=0, errorMessage='java.lang.RuntimeException:
java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient', sqlState=None, infoMessages=None, statusCode=3), operationHandle=None)
[29/Dec/2014 16:56:50 -0800] thrift_util DEBUG Thrift call: <class 'TCLIService.TCLIService.Client'>.GetTables(args=(TGetTablesReq(schemaName=u'default', sessionHandle=TSessionHandle(sessionId=THandleIdentifier(secret='\xfd\xe7=\xeb
\xa9\xbfJ\xc2\x91\x8b\xee\x07.j\xd3\xc5', guid='\xadu\x81e\xb0\x9aI\xaa\xb8\xb5-\x86\xcd\x03\xe7\x8c')), tableName='.*', tableTypes=None, catalogName=None),), kwargs={})
[29/Dec/2014 16:56:50 -0800] dbms DEBUG Query Server: {'server_host': '<our_server_name.com', 'server_port': 10000, 'server_name': 'beeswax', 'principal': 'hive/<our_server_name.com@realm'}
[29/Dec/2014 16:56:50 -0800] access INFO <server_ip and my_user_name> - "GET /beeswax/api/autocomplete/default HTTP/1.1"

 

Kind regards

Andy


Re: Pig script in Hue - START_RETRY status

Could you click on the START_RETRY status in the top right corner of the Pig Editor,
then on the Log icon of the Pig action in the dashboard? That will provide
more interesting logs (since the Oozie calls work).

The lines above are warnings and should not be the problem:
https://issues.cloudera.org/browse/HUE-2198
https://issues.cloudera.org/browse/HUE-2353

This one is related to your HiveServer2 being misconfigured, so it is
something else:
java.lang.RuntimeException: java.lang.RuntimeException: Unable to
instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient

Romain



Re: Pig script in Hue - START_RETRY status

Explorer

Thanks Romain 

Here is the log info obtained by following your procedure.

I am also reviewing it now. Much appreciate your support, and have a great day.

 

2014-12-30 11:35:54,537 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[<our_server_name.com>] USER[<my_active_dir_user_name>] GROUP[-] TOKEN[] APP[pig-app-hue-script] JOB[0000004-141228125521164-oozie-oozi-W] ACTION[0000004-141228125521164-oozie-oozi-W@:start:] Start action [0000004-141228125521164-oozie-oozi-W@:start:] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
2014-12-30 11:35:54,537 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[<our_server_name.com>] USER[<my_active_dir_user_name>] GROUP[-] TOKEN[] APP[pig-app-hue-script] JOB[0000004-141228125521164-oozie-oozi-W] ACTION[0000004-141228125521164-oozie-oozi-W@:start:] [***0000004-141228125521164-oozie-oozi-W@:start:***]Action status=DONE
2014-12-30 11:35:54,537 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[<our_server_name.com>] USER[<my_active_dir_user_name>] GROUP[-] TOKEN[] APP[pig-app-hue-script] JOB[0000004-141228125521164-oozie-oozi-W] ACTION[0000004-141228125521164-oozie-oozi-W@:start:] [***0000004-141228125521164-oozie-oozi-W@:start:***]Action updated in DB!
2014-12-30 11:35:54,630 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[<our_server_name.com>] USER[<my_active_dir_user_name>] GROUP[-] TOKEN[] APP[pig-app-hue-script] JOB[0000004-141228125521164-oozie-oozi-W] ACTION[0000004-141228125521164-oozie-oozi-W@pig] Start action [0000004-141228125521164-oozie-oozi-W@pig] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
2014-12-30 11:35:54,912 WARN org.apache.oozie.command.wf.ActionStartXCommand: SERVER[<our_server_name.com>] USER[<my_active_dir_user_name>] GROUP[-] TOKEN[] APP[pig-app-hue-script] JOB[0000004-141228125521164-oozie-oozi-W] ACTION[0000004-141228125521164-oozie-oozi-W@pig] Error starting action [pig]. ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009: HTTP status [403], message [Forbidden]]
org.apache.oozie.action.ActionExecutorException: JA009: HTTP status [403], message [Forbidden]
at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:412)
at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:396)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:990)
at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1145)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:228)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
at org.apache.oozie.command.XCommand.call(XCommand.java:281)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:323)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:252)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:174)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: HTTP status [403], message [Forbidden]
at org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:169)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:223)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:145)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:346)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:799)
at org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:86)
at org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2017)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:127)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:556)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:430)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:564)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:559)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:559)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:550)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:975)
... 10 more
2014-12-30 11:35:54,913 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[<our_server_name.com>] USER[<my_active_dir_user_name>] GROUP[-] TOKEN[] APP[pig-app-hue-script] JOB[0000004-141228125521164-oozie-oozi-W] ACTION[0000004-141228125521164-oozie-oozi-W@pig] Next Retry, Attempt Number [1] in [60,000] milliseconds
2014-12-30 11:36:54,952 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[<our_server_name.com>] USER[<my_active_dir_user_name>] GROUP[-] TOKEN[] APP[pig-app-hue-script] JOB[0000004-141228125521164-oozie-oozi-W] ACTION[0000004-141228125521164-oozie-oozi-W@pig] Start action [0000004-141228125521164-oozie-oozi-W@pig] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
2014-12-30 11:36:55,256 WARN org.apache.oozie.command.wf.ActionStartXCommand: SERVER[<our_server_name.com>] USER[<my_active_dir_user_name>] GROUP[-] TOKEN[] APP[pig-app-hue-script] JOB[0000004-141228125521164-oozie-oozi-W] ACTION[0000004-141228125521164-oozie-oozi-W@pig] Error starting action [pig]. ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009: HTTP status [403], message [Forbidden]]
org.apache.oozie.action.ActionExecutorException: JA009: HTTP status [403], message [Forbidden]
at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:412)
at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:396)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:990)
at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1145)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:228)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
at org.apache.oozie.command.XCommand.call(XCommand.java:281)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:174)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: HTTP status [403], message [Forbidden]
at org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:169)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:223)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:145)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:346)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:799)
at org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:86)
at org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2017)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:127)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:556)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:430)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:564)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:559)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:559)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:550)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:975)
... 8 more
2014-12-30 11:36:55,256 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[<our_server_name.com>] USER[<my_active_dir_user_name>] GROUP[-] TOKEN[] APP[pig-app-hue-script] JOB[0000004-141228125521164-oozie-oozi-W] ACTION[0000004-141228125521164-oozie-oozi-W@pig] Next Retry, Attempt Number [2] in [60,000] milliseconds
2014-12-30 11:37:55,370 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[<our_server_name.com>] USER[<my_active_dir_user_name>] GROUP[-] TOKEN[] APP[pig-app-hue-script] JOB[0000004-141228125521164-oozie-oozi-W] ACTION[0000004-141228125521164-oozie-oozi-W@pig] Start action [0000004-141228125521164-oozie-oozi-W@pig] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
2014-12-30 11:37:55,653 WARN org.apache.oozie.command.wf.ActionStartXCommand: SERVER[<our_server_name.com>] USER[<my_active_dir_user_name>] GROUP[-] TOKEN[] APP[pig-app-hue-script] JOB[0000004-141228125521164-oozie-oozi-W] ACTION[0000004-141228125521164-oozie-oozi-W@pig] Error starting action [pig]. ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009: HTTP status [403], message [Forbidden]]
org.apache.oozie.action.ActionExecutorException: JA009: HTTP status [403], message [Forbidden]
at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:412)
at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:396)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:990)
at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1145)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:228)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
at org.apache.oozie.command.XCommand.call(XCommand.java:281)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:174)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: HTTP status [403], message [Forbidden]
at org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:169)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:223)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:145)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:346)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:799)
at org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:86)
at org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2017)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:127)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:556)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:430)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:564)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:559)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:559)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:550)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:975)
... 8 more
2014-12-30 11:37:55,654 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[<our_server_name.com>] USER[<my_active_dir_user_name>] GROUP[-] TOKEN[] APP[pig-app-hue-script] JOB[0000004-141228125521164-oozie-oozi-W] ACTION[0000004-141228125521164-oozie-oozi-W@pig] Next Retry, Attempt Number [3] in [60,000] milliseconds


Re: Pig script in Hue - START_RETRY status

This is probably related to KMS, as stated elsewhere; I hope they will
be able to help with that!
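
The JA009 / 403 Forbidden in the Oozie log comes from KMSClientProvider.addDelegationTokens, i.e. the KMS refusing to let Oozie impersonate the submitting user when it fetches a key delegation token (this assumes your cluster uses HDFS encryption with KMS, which the KMSClientProvider frames in the stack trace suggest). One common thing to check is the KMS proxyuser configuration; below is a sketch of the standard hadoop.kms.proxyuser.* entries in kms-site.xml, where the '*' values are only examples and should be narrowed to match your environment (in CM they are set through the KMS service configuration).

<!-- Sketch only: allow Oozie to impersonate end users against the KMS. -->
<property>
  <name>hadoop.kms.proxyuser.oozie.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.oozie.groups</name>
  <value>*</value>
</property>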

Romain


Re: Pig script in Hue - START_RETRY status

Explorer

Thanks much, Romain, for your time and attention. A couple of quick questions:

 

  1. Can you help me by providing the steps I need to take from here? Last week, I created a new post at http://community.cloudera.com/t5/Data-Ingestion-Integration/KMS-AuthenticationToken-ignored-Invalid-...
  2. I guess our issue is related to this bug (https://issues.apache.org/jira/browse/HADOOP-11151). Is there a way you can help me find out whether it is going to be fixed in 5.3.1, as it seems it is not fixed in 5.3?
  3. If my assumption about the existing bug is correct, I hope I can find a safe workaround for it.

 

P.S. I can run the Pig job using Grunt with no problem, so maybe it is something related to the way Hue deals with the delegation token (owner=my_active_dir_user, realuser=oozie/ourserver@our_realm), as you can see from the error message in the new post above.

 

Appreciate your professional support.

 

Kind regards

Andy


Re: Pig script in Hue - START_RETRY status

Explorer

Hi Romain

 

I have just run the simple one-line Pig script in Hue again, and here is the new log (short version). Hope it helps.

Thanks Romain.

Just FYI, the last four lines below repeat many times, and maybe that points to the problem: Hue tries to submit the job to Oozie but cannot, possibly due to a Kerberos authentication failure.

 

handle_response(): returning <Response [200]>
[29/Dec/2014 16:56:53 -0800] kerberos_ ERROR handle_other(): Mutual authentication unavailable on 200 response
[29/Dec/2014 16:56:53 -0800] kerberos_ DEBUG handle_other(): Handling: 200
[29/Dec/2014 16:56:53 -0800] connectionpool DEBUG

Resetting dropped connection: <our server name which is removed for security reason>
[29/Dec/2014 16:56:53 -0800] access INFO < IP_ADDRESS and Active Directory user name which are removed for security reason>- "GET /pig/dashboard/ HTTP/1.1"
[29/Dec/2014 16:56:53 -0800] api WARNING Autocomplete data fetching error default.None: Bad status for request TGetTablesReq(schemaName=u'default',