Member since 09-16-2016
10 Posts
4 Kudos Received
1 Solution
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1046 | 03-17-2017 01:10 PM |
03-14-2018
07:40 AM
1 Kudo
Hey @kirk chou - sorry for not having posted this earlier. This was due to differences in the system $PATH between the RHEL6 and the RHEL7 hosts (the '/usr/bin/ln' vs '/bin/ln' location of the 'ln' command in this case). Oozie forcefully overrides the ShellAction child tasks' execution context, especially $PATH, with the values defined on the application master node. If $PATH on the application master node differs from $PATH on the worker node, then the task will fail on the worker node. Hope this helps, -Regis
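A minimal sketch of the workaround this implies (the script name and contents are assumptions, not from the original thread): pinning $PATH at the top of the shell action script makes it independent of whatever $PATH was captured on the application master.

```shell
#!/bin/sh
# Hypothetical Oozie shell-action script: pin PATH before any command
# runs, so execution no longer depends on the $PATH inherited from the
# application master node.
export PATH=/usr/bin:/bin:$PATH

# 'ln' now resolves whether it lives in /bin (RHEL6) or /usr/bin (RHEL7).
command -v ln
```

With both standard locations prepended, the same script works regardless of which OS release built the container environment.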
09-26-2017
09:47 AM
Hi all - we have a cluster of RHEL6 and RHEL7 nodes. When Oozie launches a workflow and uses a RHEL7 node as application master, the tasks dispatched to RHEL6 nodes fail to execute the launch_container.sh script. As per the log, it looks like the PATH might not be set properly (see below), as it cannot find the "ln" command. Stack trace:

ExitCodeException exitCode=127: /data/d9/yarn/nm/usercache/hdfs/appcache/application_1506341577822_0486/container_e102_1506341577822_0486_01_000004/launch_container.sh: line 30: ln: command not found
at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
at org.apache.hadoop.util.Shell.run(Shell.java:487)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:303)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 127
The log from the job looks like the below:

2017-09-25 19:05:12,937 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1506341577822_0486_m_000000 Task Transitioned from SCHEDULED to RUNNING
2017-09-25 19:05:13,701 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1506341577822_0486: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:76800, vCores:0> knownNMs=2
2017-09-25 19:05:14,707 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_e102_1506341577822_0486_01_000002
2017-09-25 19:05:14,708 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0
2017-09-25 19:05:14,710 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1506341577822_0486_m_000000_0 TaskAttempt Transitioned from RUNNING to FAIL_CONTAINER_CLEANUP
2017-09-25 19:05:14,710 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1506341577822_0486_m_000000_0: Exception from container-launch.

When MR jobs are not triggered by Oozie, all is fine on those RHEL7 nodes. I'm able to reproduce the issue with HDP 2.3.4 and 2.3.6. I could not find any known Oozie issue that got fixed in later versions. Any help or pointers welcome. Best, -Regis
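One way to diagnose this (a sketch with a fabricated sample line, since the real launch_container.sh is not in the thread): the script exports the environment captured on the application master, so checking whether its baked-in PATH contains /bin shows whether 'ln' can resolve on a RHEL6 worker.

```shell
#!/bin/sh
# Diagnostic sketch: inspect the PATH that launch_container.sh exports.
script=launch_container.sh   # hypothetical local copy of the script

# Demo line as it might appear when a RHEL7 application master built
# the script (on RHEL7, /bin is merged into /usr/bin):
printf 'export PATH="/usr/local/bin:/usr/bin"\n' > "$script"

am_path=$(sed -n 's/^export PATH="\(.*\)"/\1/p' "$script")
echo "PATH from AM: $am_path"
case ":$am_path:" in
  *:/bin:*) echo "/bin is on PATH: ln will resolve on RHEL6" ;;
  *)        echo "/bin missing from PATH: expect 'ln: command not found' on RHEL6" ;;
esac
rm -f "$script"
```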
Labels:
- Apache Hadoop
- Apache Oozie
03-17-2017
01:10 PM
My bad - the Pig script in the Apache example assumed that quota is an int; in my case, it's a long. Changing the type fixed the issue.
03-16-2017
07:37 PM
1 Kudo
All, I would like to generate a list of all HDFS directories for which a quota has been set, and report the quota size. I have used dfs -count successfully from the command line - the shortcoming is that it is expensive, and running it recursively on every folder of a large HDFS Production cluster is probably not a good idea. I tried a different approach using the fsimage, converting it with oiv to Delimited format. However, the namespace quota and diskspace quota values are consistently -1, 0 or blank. I cannot seem to get the quota value anywhere. If you have pointers to why this is happening, or an alternative approach to achieve this, I'd love to hear it 🙂 Kindest regards, -Regis
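A sketch of the dfs -count approach, assuming the standard 'hdfs dfs -count -q' column layout (QUOTA REM_QUOTA SPACE_QUOTA REM_SPACE_QUOTA DIRS FILES BYTES PATH; unset quotas print as "none"/"inf"); the paths and sample output here are invented for illustration:

```shell
#!/bin/sh
# Keep only the directories where a name or space quota is actually set.
quota_report() {
  awk '$1 != "none" || $3 != "none" { printf "%s\tname=%s\tspace=%s\n", $8, $1, $3 }'
}

# On a real cluster (hypothetical paths; globbing avoids a full recursion):
#   hdfs dfs -count -q '/user/*' '/data/*' | quota_report
# Demo with captured sample output:
printf '%s\n' \
  'none inf none inf 3 10 1024 /user/alice' \
  '1000 990 none inf 2 8 2048 /user/bob' |
quota_report
```

Running one -count -q per top-level tree and filtering client-side keeps the cost to a handful of NameNode calls instead of a recursive walk.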
Labels:
- Apache Hadoop
09-16-2016
02:07 PM
@Artem Ervits I restarted the ambari-server; still no quick links. Yes, I can capture bundles.
09-16-2016
11:57 AM
1 Kudo
Hi, I have installed SmartSense 1.3 and use Ambari 2.2.2.0. When I go to the SmartSense service to access the Activity Explorer, there are no quick links available in the Summary section. As per the documentation below, they should be available. http://docs.hortonworks.com/HDPDocuments/SS1/SmartSense-1.3.0/bk_user-guide/content/activity_explorer.html Is anyone seeing the same behaviour? Thanks, -Regis
Labels:
- Apache Ambari
- Hortonworks SmartSense