Member since
02-10-2019
47
Posts
9
Kudos Received
8
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4054 | 07-15-2019 12:04 PM
 | 3244 | 11-03-2018 05:00 AM
 | 5679 | 10-24-2018 07:38 AM
 | 6568 | 10-08-2018 09:47 AM
 | 1712 | 08-17-2018 06:33 AM
07-15-2019
12:04 PM
1 Kudo
@Javert Kirilov The config JSON for ats-hbase should be created when the ResourceManager starts up. If it was not created, check the ResourceManager logs for any errors related to ats-hbase during startup.
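To triage quickly, the check amounts to scanning the ResourceManager log for error-level lines mentioning ats-hbase. A minimal sketch of that scan (the sample lines below are made up for illustration, and the function name is mine; your actual log location depends on your install):

```python
def find_ats_hbase_errors(log_lines):
    """Return log lines that mention ats-hbase at ERROR/FATAL level."""
    return [
        line for line in log_lines
        if ("ERROR" in line or "FATAL" in line) and "ats-hbase" in line
    ]

# Hypothetical excerpt for illustration only -- not real ResourceManager output.
sample = [
    "2019-07-15 11:50:01 INFO  resourcemanager.ResourceManager - transitioned to active",
    "2019-07-15 11:50:03 ERROR failure while launching ats-hbase service",
]
print(find_ats_hbase_errors(sample))
```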
11-06-2018
06:27 AM
@Sam Hjelmfelt I don't think setting YARN_CONTAINER_RUNTIME_DOCKER_RUN_OVERRIDE_DISABLE in yarn-env.sh will propagate it into the ContainerLaunchContext. Have you tried setting it in the service spec itself to see if that helps?
11-03-2018
05:00 AM
1 Kudo
@Sam Hjelmfelt Running Docker containers that have an ENTRYPOINT in YARN Services requires additional configuration in the service spec: the env variable YARN_CONTAINER_RUNTIME_DOCKER_RUN_OVERRIDE_DISABLE needs to be set to "true", and the launch command parameters are separated with commas instead of spaces. Try running with the spec below.

{
  "name": "myapp",
  "version": "1.0.0",
  "description": "myapp",
  "components": [
    {
      "name": "myappcontainers",
      "number_of_containers": 1,
      "artifact": {
        "id": "myapp:1.0-SNAPSHOT",
        "type": "DOCKER"
      },
      "launch_command": "input1,input2",
      "resource": {
        "cpus": 1,
        "memory": "256"
      },
      "configuration": {
        "env": {
          "YARN_CONTAINER_RUNTIME_DOCKER_RUN_OVERRIDE_DISABLE": "true"
        }
      }
    }
  ]
}

For further reference, refer to the documentation here
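Before launching, it can be worth sanity-checking a spec against the two ENTRYPOINT-related requirements (the override-disable env variable and the comma-separated launch command). A small sketch of such a check (this helper is mine, not part of YARN):

```python
import json

def check_entrypoint_spec(spec):
    """Flag missing ENTRYPOINT-related settings in a YARN service spec dict."""
    problems = []
    for comp in spec.get("components", []):
        env = comp.get("configuration", {}).get("env", {})
        if env.get("YARN_CONTAINER_RUNTIME_DOCKER_RUN_OVERRIDE_DISABLE") != "true":
            problems.append(f"{comp['name']}: override-disable env var not set to \"true\"")
        if " " in comp.get("launch_command", ""):
            problems.append(f"{comp['name']}: launch_command should use commas, not spaces")
    return problems

spec = json.loads("""
{
  "name": "myapp",
  "components": [
    {
      "name": "myappcontainers",
      "launch_command": "input1,input2",
      "configuration": {
        "env": {"YARN_CONTAINER_RUNTIME_DOCKER_RUN_OVERRIDE_DISABLE": "true"}
      }
    }
  ]
}
""")
print(check_entrypoint_spec(spec))  # an empty list means both checks pass
```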
10-29-2018
10:54 AM
@john x The ResourceManager only keeps information for containers that are currently running for an application. Containers that have already finished will not be present in the ResourceManager, but they will be present in the App Timeline Server. So the container list command tries to fetch from both the ResourceManager and the App Timeline Server. The error shows that the App Timeline Server does not contain the application. Maybe the App Timeline Server had issues or was not running when this application was submitted.
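The lookup can be pictured roughly like this; a sketch of the behavior described above, not YARN's actual code, with the two fetch functions as stand-ins:

```python
def list_containers(app_id, fetch_from_rm, fetch_from_timeline):
    """Running containers come from the RM; finished ones from the App Timeline Server."""
    containers = list(fetch_from_rm(app_id))         # currently running only
    containers += list(fetch_from_timeline(app_id))  # already finished; fails if app unknown
    return containers

# Stand-in data sources for illustration only.
def rm(app_id):
    return ["container_01_000002"]                   # still running

def timeline(app_id):
    known = {"app_1": ["container_01_000001"]}       # finished containers
    if app_id not in known:
        raise LookupError(f"{app_id} not found in App Timeline Server")
    return known[app_id]

print(list_containers("app_1", rm, timeline))
```

An application the App Timeline Server never saw makes the second fetch fail, which matches the error in your output.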
10-24-2018
11:06 AM
@Prashant Gupta Good to know that the ResourceManager started successfully. Kindly mark the answer as accepted if that resolved the problem.
10-24-2018
07:38 AM
@Prashant Gupta From the logs you attached, it looks like you have enabled GPU scheduling, but it is still using the DefaultResourceCalculator:

2018-10-22 17:48:02,490 FATAL resourcemanager.ResourceManager (ResourceManager.java:main(1495)) - Error starting ResourceManager
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: RM uses DefaultResourceCalculator which used only memory as resource-type but invalid resource-types specified {yarn.io/gpu=name: yarn.io/gpu, units: , type: COUNTABLE, value: 0, minimum allocation: 0, maximum allocation: 9223372036854775807, memory-mb=name: memory-mb, units: Mi, type: COUNTABLE, value: 0, minimum allocation: 1024, maximum allocation: 191488, vcores=name: vcores, units: , type: COUNTABLE, value: 0, minimum allocation: 1, maximum allocation: 32}. Use DominantResourceCalculator instead to make effective use of these resource-types

In YARN -> Configs -> Advanced -> Scheduler, set the following:

yarn.scheduler.capacity.resource-calculator=org.apache.hadoop.yarn.util.resource.DominantResourceCalculator
10-09-2018
09:36 AM
Yes. Actually, a user can only run a single job at any moment. To run multiple jobs at the same time, they all need to be submitted as different users.
10-09-2018
08:05 AM
@Soumitra Sulav The problem seems to be that the FileStatus returned by OzoneFileSystem does not have the owner field set, so it is empty. As a result, the ownership check fails. One workaround I see is to delete the /tmp/hadoop-yarn/staging/hdfs/.staging directory before submitting the MapReduce job. The ownership check is then bypassed and the staging directory is created again. But this means you can't have more than one job using the /tmp/hadoop-yarn/staging/hdfs/.staging directory, so it's not a good workaround, although it is the only one available from what I see (apart from a code change in MapReduce/Ozone).
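The failing check is essentially an owner comparison on the existing staging directory. A simplified sketch of that logic (the function name is mine, and this is an illustration of the idea, not MapReduce's actual code) shows why an unset owner field trips it:

```python
def verify_staging_owner(dir_owner, current_user):
    """If the staging directory already exists, its owner must match the submitting user."""
    if dir_owner != current_user:
        raise PermissionError(
            f"staging directory is owned by {dir_owner!r}, "
            f"but must be owned by the submitter {current_user!r}"
        )

# OzoneFileSystem leaves the owner field unset, so the check sees "" and fails.
try:
    verify_staging_owner("", "hdfs")
except PermissionError as e:
    print(e)
```

Deleting the directory sidesteps the comparison entirely, because a freshly created staging directory is not checked against an existing owner.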
10-08-2018
01:21 PM
Good to know you got it resolved. You can accept the answer if it helped. One more thing to note: Java debugging doesn't work if more than one map container is launched on the same node. This is because both map container processes will try to listen on the debug port 8787 and might fail.
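The port clash is easy to reproduce outside YARN. This small sketch binds one listener and then shows a second bind on the same port failing, which is what happens when two map JVMs on one node both ask for the same debug port:

```python
import socket

first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))         # let the OS pick a free port
port = first.getsockname()[1]
first.listen(1)                      # the first "JVM" now owns the port

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))  # the second "JVM" asks for the same port
except OSError as e:
    print(f"second bind failed: {e}")
finally:
    second.close()
    first.close()
```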
10-08-2018
09:47 AM
@Eddie Generally, specifying mapreduce.map.java.opts in quotes works for all the example jobs. The following command running a pi job worked:

yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar pi -Dmapreduce.map.java.opts="-XX:+PrintGCDetails -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCDateStamps -XX:SurvivorRatio=8" 1 1

I see that your command uses your specific class MyClass.class. The example pi job works because it parses the command line options using org.apache.hadoop.util.GenericOptionsParser. Your MyClass should also use org.apache.hadoop.util.GenericOptionsParser to parse the command line options for this to work properly.
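For intuition, GenericOptionsParser's job here is to peel off generic options such as -D key=value before your class sees its remaining arguments. A rough Python imitation of that splitting (the real parser is the Java class named above; this sketch only covers -D, and the function name is mine):

```python
def split_generic_options(argv):
    """Separate -D key=value generic options from the remaining application args,
    roughly what org.apache.hadoop.util.GenericOptionsParser does for -D."""
    conf, remaining, i = {}, [], 0
    while i < len(argv):
        if argv[i] == "-D":                # "-D key=value" form
            key, _, value = argv[i + 1].partition("=")
            conf[key] = value
            i += 2
        elif argv[i].startswith("-D"):     # "-Dkey=value" form
            key, _, value = argv[i][2:].partition("=")
            conf[key] = value
            i += 1
        else:
            remaining.append(argv[i])
            i += 1
    return conf, remaining

conf, args = split_generic_options(
    ["-Dmapreduce.map.java.opts=-XX:+PrintGCDetails -XX:SurvivorRatio=8", "1", "1"]
)
print(conf)  # {'mapreduce.map.java.opts': '-XX:+PrintGCDetails -XX:SurvivorRatio=8'}
print(args)  # ['1', '1']
```

A main class that skips this step sees the quoted -D option as just another positional argument, which is why the opts never take effect.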