
Pig job execution stops at 0%

Expert Contributor


I am using Hortonworks Data Cloud on AWS. I have created a cluster with 1 master node and 1 worker node, each having 15 GB of memory. When I execute a Pig script from the Grunt shell, the execution stops at 0%.

I restarted the YARN service and the Job History Server, but I am still facing the same issue.

How can I resolve this issue?

The attached image shows my terminal with the job execution stopped.

(Attachment: img-10042018-160821-0.png) Thank you.


@heta desai,

Check whether the application is in the RUNNING or the ACCEPTED state. If it is stuck in ACCEPTED, this may be because of a resource crunch. If the problem is resources, try adding a NodeManager and see if the application moves to the RUNNING state.
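To check the state from the command line, a quick sketch (assumes the `yarn` CLI is available on a cluster node; the application ID below is a placeholder, substitute the one shown by the list command):

```shell
# List applications that are waiting or running, with their states
yarn application -list -appStates ACCEPTED,RUNNING

# Inspect one application in detail (placeholder ID, replace with yours)
yarn application -status application_1523363770240_0001
```

If the app stays in ACCEPTED, the status output usually shows a diagnostics message explaining which resource limit is blocking it.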



Expert Contributor

@Aditya Sirna The application was in the ACCEPTED state. I terminated the existing NodeManager and added a new one, but it is still in the ACCEPTED state.

Now it is throwing the following error:

ERROR - ERROR 1066: Unable to open iterator for alias fltr Details at logfile: /home/cloudbreak/pig_1523363770240.log

@heta desai,

There is not much info here. Can you please click on the application in the ACCEPTED state and paste the logs? Also, can you please attach the /home/cloudbreak/pig_1523363770240.log file?
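If log aggregation is enabled, the container logs can also be pulled from the command line, roughly like this (sketch; the application ID is a placeholder to be replaced with the real one from the ResourceManager UI or `yarn application -list`):

```shell
# Fetch the aggregated logs for a finished/failed YARN application
# (placeholder application ID, substitute your own)
yarn logs -applicationId application_1523363770240_0001 | less
```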

Expert Contributor

@Aditya Sirna

The attached image shows the application in the ACCEPTED state:


Actually, I have created a new cluster but am still facing the same issue.

Sorry, as I have deleted the old cluster, I cannot give you the log file.

Below is the image that shows the stack trace while I execute the script:


As you can see in the above image, the execution is stopped.

@heta desai,

Based on the log, your YARN Application Master resource limit was exceeded. You can try increasing the "Maximum AM Resource" percentage in the YARN Queue Manager and see if it works: go to Ambari -> click the Views icon (top right in the UI) -> YARN Queue Manager -> increase Maximum AM Resource by 10% and refresh the queues.
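For reference, the Queue Manager view edits the same limit that lives in capacity-scheduler.xml. A minimal sketch of the cluster-wide property (the 0.2 value is only an example, meaning Application Masters may together use up to 20% of the queue's resources):

```xml
<!-- capacity-scheduler.xml: cluster-wide default AM resource limit.
     Example value 0.2 = AMs may consume up to 20% of resources.
     A per-queue override also exists, e.g.
     yarn.scheduler.capacity.root.default.maximum-am-resource-percent -->
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.2</value>
</property>
```

After changing the file directly, the queues need to be refreshed (e.g. with `yarn rmadmin -refreshQueues`); the Queue Manager view does this for you.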