
How do I confirm that YARN is managing a Docker container application?

Explorer

Hello everyone.

I am trying to use Docker on YARN.

I want to know how to confirm that a Docker container application is managed by YARN.

My environment:

  • Apache Hadoop 3.1.1
  • Docker CE (18.09)
  • Ubuntu 16.04 LTS

My node count: 1 (pseudo-distributed mode)

My node runs the following components:

  • NameNode
  • Secondary NameNode
  • DataNode
  • ResourceManager
  • NodeManager
  • JobHistoryServer

I have tried the steps in the article below.

https://jp.hortonworks.com/blog/trying-containerized-applications-apache-hadoop-yarn-3-1/

To use Docker on YARN, I had to configure two files: yarn-site.xml and container-executor.cfg.

Below is my configuration.

■ yarn-site.xml

<configuration>
<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>


    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>my-hdp01</value>
    </property>


    <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>3</value>
    </property>


    <property>
      <name>yarn.resourcemanager.address</name>
      <value>my-hdp01:8050</value>
    </property>


    <property>
      <name>yarn.resourcemanager.scheduler.address</name>
      <value>my-hdp01:8030</value>
    </property>


    <property>
      <name>yarn.resourcemanager.resource-tracker.address</name>
      <value>my-hdp01:8025</value>
    </property>


    <property>
      <name>yarn.resourcemanager.admin.address</name>
      <value>my-hdp01:8141</value>
    </property>


    <property>
      <name>yarn.resourcemanager.webapp.address</name>
      <value>my-hdp01:8088</value>
    </property>


    <property>
      <name>yarn.resourcemanager.scheduler.class</name>
      <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    </property>


    <property>
      <name>yarn.scheduler.minimum-allocation-mb</name>
      <value>1024</value>
    </property>


    <property>
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>2048</value>
    </property>


    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>2048</value>
    </property>


    <property>
      <name>yarn.nodemanager.log.retain-seconds</name>
      <value>10800</value>
    </property>


    <property>
      <name>yarn.nodemanager.remote-app-log-dir</name>
      <value>/app-logs</value>
    </property>


    <property>
      <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
      <value>logs</value>
    </property>


    <property>
      <name>yarn.nodemanager.health-checker.interval-ms</name>
      <value>135000</value>
    </property>


    <property>
      <name>yarn.nodemanager.health-checker.script.timeout-ms</name>
      <value>60000</value>
    </property>


    <property>
      <name>yarn.nodemanager.local-dirs</name>
      <value>/var/local/hadoop/cache/hadoop/nm-local-dir</value>
    </property>


    <property>
      <name>yarn.nodemanager.log-dirs</name>
      <value>/home/hadoop/hadoop-3.1.1/logs/userlogs</value>
    </property>


<!-- Docker on YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.container-executor.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
    <description>
      This is the container executor setting that ensures that all applications
      are started with the LinuxContainerExecutor.
    </description>
  </property>


  <property>
    <name>yarn.nodemanager.linux-container-executor.group</name>
    <value>hadoop</value>
    <description>
      The POSIX group of the NodeManager. It should match the setting in
      "container-executor.cfg". This configuration is required for validating
      the secure access of the container-executor binary.
    </description>
  </property>


  <property>
    <name>yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users</name>
    <value>true</value>
    <description>
      Whether all applications should be run as the NodeManager process' owner.
      When false, applications are launched instead as the application owner.
    </description>
  </property>


  <property>
    <name>yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user</name>
    <value>nobody</value>
  </property>


  <property>
    <name>yarn.nodemanager.runtime.linux.allowed-runtimes</name>
    <value>default,docker</value>
    <description>
      Comma separated list of runtimes that are allowed when using
      LinuxContainerExecutor. The allowed values are default, docker, and
      javasandbox.
    </description>
  </property>


  <property>
    <name>yarn.nodemanager.runtime.linux.docker.allowed-container-networks</name>
    <value>host,none,bridge</value>
    <description>
      Optional. A comma-separated set of networks allowed when launching
      containers. Valid values are determined by Docker networks available from
      `docker network ls`
    </description>
  </property>


  <property>
    <name>yarn.nodemanager.runtime.linux.docker.default-container-network</name>
    <value>host</value>
    <description>
      The network used when launching Docker containers when no
      network is specified in the request. This network must be one of the
      (configurable) set of allowed container networks.
    </description>
  </property>


  <property>
    <name>yarn.nodemanager.runtime.linux.docker.privileged-containers.allowed</name>
    <value>false</value>
    <description>
      Optional. Whether applications are allowed to run in privileged
      containers.
    </description>
  </property>


  <property>
    <name>yarn.nodemanager.runtime.linux.docker.privileged-containers.acl</name>
    <value></value>
    <description>
      Optional. A comma-separated list of users who are allowed to request
      privileged containers if privileged containers are allowed.
    </description>
  </property>


  <property>
    <name>yarn.nodemanager.runtime.linux.docker.capabilities</name>
    <value>CHOWN,DAC_OVERRIDE,FSETID,FOWNER,MKNOD,NET_RAW,SETGID,SETUID,SETFCAP,SETPCAP,NET_BIND_SERVICE,SYS_CHROOT,KILL,AUDIT_WRITE</value>
    <description>
      Optional. This configuration setting determines the capabilities
      assigned to docker containers when they are launched. While these may not
      be case-sensitive from a docker perspective, it is best to keep these
      uppercase. To run without any capabilities, set this value to
      "none" or "NONE".
    </description>
  </property>
</configuration>
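
After changing yarn-site.xml I restart YARN so the new properties take effect. A minimal sketch, assuming the stock sbin scripts of my pseudo-distributed setup:

$ stop-yarn.sh && start-yarn.sh
# or restart the daemons individually:
$ yarn --daemon stop nodemanager && yarn --daemon start nodemanager
$ yarn --daemon stop resourcemanager && yarn --daemon start resourcemanager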

■ container-executor.cfg

yarn.nodemanager.local-dirs=/var/local/hadoop/cache/hadoop/nm-local-dir
yarn.nodemanager.log-dirs=/home/hadoop/hadoop-3.1.1/logs/userlogs
yarn.nodemanager.linux-container-executor.group=hadoop # configured value of yarn.nodemanager.linux-container-executor.group
banned.users=hdfs,yarn,mapred,bin # comma separated list of users who cannot run applications
min.user.id=50 # Prevent other super-users
# allowed.system.users=# # comma separated list of system users who CAN run applications
# feature.tc.enabled=1

# The configs below deal with settings for Docker
[docker]
module.enabled=true # enable/disable the module. set to "true" to enable, disabled by default
docker.binary=/usr/bin/docker
docker.allowed.capabilities=CHOWN,DAC_OVERRIDE,FSETID,FOWNER,MKNOD,NET_RAW,SETGID,SETUID,SETFCAP,SETPCAP,NET_BIND_SERVICE,SYS_CHROOT,KILL,AUDIT_WRITE,DAC_READ_SEARCH,SYS_PTRACE,SYS_ADMIN
#  docker.allowed.devices=## comma separated list of devices that can be mounted into a container
docker.allowed.networks=bridge,host,none ## comma separated networks that can be used. e.g. bridge,host,none
docker.allowed.ro-mounts=/sys/fs/cgroup,/var/local/hadoop/cache/hadoop/nm-local-dir ## comma separated volumes that can be mounted as read-only
docker.allowed.rw-mounts=/var/local/hadoop/cache/hadoop/nm-local-dir,/home/hadoop/hadoop-3.1.1/logs/userlogs ## comma separated volumes that can be mounted as read-write, add the yarn local and log dirs to this list to run Hadoop jobs
docker.trusted.registries=local,local/centos
#  docker.privileged-containers.enabled=false
#  docker.allowed.volume-drivers=## comma separated list of allowed volume-drivers
#  docker.no-new-privileges.enabled=## enable/disable the no-new-privileges flag for docker run. Set to "true" to enable, disabled by default

# The configs below deal with settings for FPGA resource
#[fpga]
#  module.enabled=## Enable/Disable the FPGA resource handler module. set to "true" to enable, disabled by default
#  fpga.major-device-number=## Major device number of FPGA, by default is 246. Strongly recommend setting this
#  fpga.allowed-device-minor-numbers=## Comma separated allowed minor device numbers, empty means all FPGA devices managed by YARN.
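
For reference, LinuxContainerExecutor also requires the container-executor binary to be setuid root and group-owned by the group configured above (hadoop in my case). A quick check, with the path assuming my install directory:

$ ls -l /home/hadoop/hadoop-3.1.1/bin/container-executor
# expected owner/mode: root:hadoop, ---Sr-s--- (6050); if not:
$ sudo chown root:hadoop /home/hadoop/hadoop-3.1.1/bin/container-executor
$ sudo chmod 6050 /home/hadoop/hadoop-3.1.1/bin/container-executor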

Next, I set the environment variables below (the corresponding export commands are shown after the list).

  • DSHELL_JAR : /home/hadoop/hadoop-3.1.1/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-*.jar
  • RUNTIME : docker
  • DOCKER_IMAGE : local/centos:latest
  • DSHELL_CMD : hostname
  • NUM_OF_CONTAINERS : 1
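
In my shell these are set as follows (the trailing glob in DSHELL_JAR is expanded by the shell when the variable is used unquoted):

$ export DSHELL_JAR=/home/hadoop/hadoop-3.1.1/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-*.jar
$ export RUNTIME=docker
$ export DOCKER_IMAGE=local/centos:latest
$ export DSHELL_CMD=hostname
$ export NUM_OF_CONTAINERS=1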

Then, I executed the command below, following the article linked above.

$ yarn jar $DSHELL_JAR \
-shell_env YARN_CONTAINER_RUNTIME_TYPE="$RUNTIME" \
-shell_env YARN_CONTAINER_RUNTIME_DOCKER_IMAGE="$DOCKER_IMAGE" \
-shell_command $DSHELL_CMD \
-jar $DSHELL_JAR \
-num_containers $NUM_OF_CONTAINERS

The result is below.

hadoop@my-hdp01:~/hadoop-3.1.1$ yarn jar $DSHELL_JAR -shell_env YARN_CONTAINER_RUNTIME_TYPE="$RUNTIME" -shell_env YARN_CONTAINER_RUNTIME_DOCKER_IMAGE="$DOCKER_IMAGE" -shell_command $DSHELL_CMD -jar $DSHELL_JAR -num_containers $NUM_OF_CONTAINERS
2019-02-05 17:43:09,271 INFO distributedshell.Client: Initializing Client
2019-02-05 17:43:09,282 INFO distributedshell.Client: Running Client
2019-02-05 17:43:09,805 INFO client.RMProxy: Connecting to ResourceManager at my-hdp01/192.168.56.2:8050
2019-02-05 17:43:10,530 INFO distributedshell.Client: Got Cluster metric info from ASM, numNodeManagers=1
2019-02-05 17:43:10,564 INFO distributedshell.Client: Got Cluster node info from ASM
2019-02-05 17:43:10,568 INFO distributedshell.Client: Got node report from ASM for, nodeId=my-hdp01:43059, nodeAddress=my-hdp01:8042, nodeRackName=/default-rack, nodeNumContainers=0
2019-02-05 17:43:10,612 INFO distributedshell.Client: Queue info, queueName=default, queueCurrentCapacity=0.0, queueMaxCapacity=1.0, queueApplicationCount=0, queueChildQueueCount=0
2019-02-05 17:43:10,638 INFO distributedshell.Client: User ACL Info for Queue, queueName=root, userAcl=SUBMIT_APPLICATIONS
2019-02-05 17:43:10,639 INFO distributedshell.Client: User ACL Info for Queue, queueName=root, userAcl=ADMINISTER_QUEUE
2019-02-05 17:43:10,639 INFO distributedshell.Client: User ACL Info for Queue, queueName=default, userAcl=SUBMIT_APPLICATIONS
2019-02-05 17:43:10,639 INFO distributedshell.Client: User ACL Info for Queue, queueName=default, userAcl=ADMINISTER_QUEUE
2019-02-05 17:43:10,692 INFO distributedshell.Client: Max mem capability of resources in this cluster 2048
2019-02-05 17:43:10,693 INFO distributedshell.Client: Max virtual cores capability of resources in this cluster 4
2019-02-05 17:43:10,712 WARN distributedshell.Client: AM Memory not specified, use 100 mb as AM memory
2019-02-05 17:43:10,712 WARN distributedshell.Client: AM vcore not specified, use 1 mb as AM vcores
2019-02-05 17:43:10,712 WARN distributedshell.Client: AM Resource capability=<memory:100, vCores:1>
2019-02-05 17:43:10,713 INFO distributedshell.Client: Copy App Master jar from local filesystem and add to local environment
2019-02-05 17:43:12,361 INFO distributedshell.Client: Set the environment for the application master
2019-02-05 17:43:12,362 INFO distributedshell.Client: Setting up app master command
2019-02-05 17:43:12,363 INFO distributedshell.Client: Completed setting up app master command {{JAVA_HOME}}/bin/java -Xmx100m org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster --container_type GUARANTEED --container_memory 10 --container_vcores 1 --num_containers 1 --priority 0 --shell_env YARN_CONTAINER_RUNTIME_TYPE=docker --shell_env YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=local/centos:latest 1><LOG_DIR>/AppMaster.stdout 2><LOG_DIR>/AppMaster.stderr
2019-02-05 17:43:12,385 INFO distributedshell.Client: Submitting application to ASM
2019-02-05 17:43:12,495 INFO impl.YarnClientImpl: Submitted application application_1549355593168_0002
2019-02-05 17:43:13,507 INFO distributedshell.Client: Got application report from ASM for, appId=2, clientToAMToken=null, appDiagnostics=AM container is launched, waiting for AM container to Register with RM, appMasterHost=N/A, appQueue=default, appMasterRpcPort=-1, appStartTime=1549356192417, yarnAppState=ACCEPTED, distributedFinalState=UNDEFINED, appTrackingUrl=http://my-hdp01:8088/proxy/application_1549355593168_0002/, appUser=hadoop
2019-02-05 17:43:14,511 INFO distributedshell.Client: Got application report from ASM for, appId=2, clientToAMToken=null, appDiagnostics=AM container is launched, waiting for AM container to Register with RM, appMasterHost=N/A, appQueue=default, appMasterRpcPort=-1, appStartTime=1549356192417, yarnAppState=ACCEPTED, distributedFinalState=UNDEFINED, appTrackingUrl=http://my-hdp01:8088/proxy/application_1549355593168_0002/, appUser=hadoop
2019-02-05 17:43:15,516 INFO distributedshell.Client: Got application report from ASM for, appId=2, clientToAMToken=null, appDiagnostics=AM container is launched, waiting for AM container to Register with RM, appMasterHost=N/A, appQueue=default, appMasterRpcPort=-1, appStartTime=1549356192417, yarnAppState=ACCEPTED, distributedFinalState=UNDEFINED, appTrackingUrl=http://my-hdp01:8088/proxy/application_1549355593168_0002/, appUser=hadoop
2019-02-05 17:43:16,522 INFO distributedshell.Client: Got application report from ASM for, appId=2, clientToAMToken=null, appDiagnostics=AM container is launched, waiting for AM container to Register with RM, appMasterHost=N/A, appQueue=default, appMasterRpcPort=-1, appStartTime=1549356192417, yarnAppState=ACCEPTED, distributedFinalState=UNDEFINED, appTrackingUrl=http://my-hdp01:8088/proxy/application_1549355593168_0002/, appUser=hadoop
2019-02-05 17:43:17,530 INFO distributedshell.Client: Got application report from ASM for, appId=2, clientToAMToken=null, appDiagnostics=AM container is launched, waiting for AM container to Register with RM, appMasterHost=N/A, appQueue=default, appMasterRpcPort=-1, appStartTime=1549356192417, yarnAppState=ACCEPTED, distributedFinalState=UNDEFINED, appTrackingUrl=http://my-hdp01:8088/proxy/application_1549355593168_0002/, appUser=hadoop
2019-02-05 17:43:18,538 INFO distributedshell.Client: Got application report from ASM for, appId=2, clientToAMToken=null, appDiagnostics=AM container is launched, waiting for AM container to Register with RM, appMasterHost=N/A, appQueue=default, appMasterRpcPort=-1, appStartTime=1549356192417, yarnAppState=ACCEPTED, distributedFinalState=UNDEFINED, appTrackingUrl=http://my-hdp01:8088/proxy/application_1549355593168_0002/, appUser=hadoop
2019-02-05 17:43:19,543 INFO distributedshell.Client: Got application report from ASM for, appId=2, clientToAMToken=null, appDiagnostics=AM container is launched, waiting for AM container to Register with RM, appMasterHost=N/A, appQueue=default, appMasterRpcPort=-1, appStartTime=1549356192417, yarnAppState=ACCEPTED, distributedFinalState=UNDEFINED, appTrackingUrl=http://my-hdp01:8088/proxy/application_1549355593168_0002/, appUser=hadoop
2019-02-05 17:43:20,548 INFO distributedshell.Client: Got application report from ASM for, appId=2, clientToAMToken=null, appDiagnostics=, appMasterHost=my-hdp01/192.168.56.2, appQueue=default, appMasterRpcPort=-1, appStartTime=1549356192417, yarnAppState=RUNNING, distributedFinalState=UNDEFINED, appTrackingUrl=http://my-hdp01:8088/proxy/application_1549355593168_0002/, appUser=hadoop
2019-02-05 17:43:21,552 INFO distributedshell.Client: Got application report from ASM for, appId=2, clientToAMToken=null, appDiagnostics=, appMasterHost=my-hdp01/192.168.56.2, appQueue=default, appMasterRpcPort=-1, appStartTime=1549356192417, yarnAppState=RUNNING, distributedFinalState=UNDEFINED, appTrackingUrl=http://my-hdp01:8088/proxy/application_1549355593168_0002/, appUser=hadoop
2019-02-05 17:43:22,581 INFO distributedshell.Client: Got application report from ASM for, appId=2, clientToAMToken=null, appDiagnostics=, appMasterHost=my-hdp01/192.168.56.2, appQueue=default, appMasterRpcPort=-1, appStartTime=1549356192417, yarnAppState=RUNNING, distributedFinalState=UNDEFINED, appTrackingUrl=http://my-hdp01:8088/proxy/application_1549355593168_0002/, appUser=hadoop
2019-02-05 17:43:23,589 INFO distributedshell.Client: Got application report from ASM for, appId=2, clientToAMToken=null, appDiagnostics=, appMasterHost=my-hdp01/192.168.56.2, appQueue=default, appMasterRpcPort=-1, appStartTime=1549356192417, yarnAppState=RUNNING, distributedFinalState=UNDEFINED, appTrackingUrl=http://my-hdp01:8088/proxy/application_1549355593168_0002/, appUser=hadoop
2019-02-05 17:43:24,597 INFO distributedshell.Client: Got application report from ASM for, appId=2, clientToAMToken=null, appDiagnostics=, appMasterHost=my-hdp01/192.168.56.2, appQueue=default, appMasterRpcPort=-1, appStartTime=1549356192417, yarnAppState=RUNNING, distributedFinalState=UNDEFINED, appTrackingUrl=http://my-hdp01:8088/proxy/application_1549355593168_0002/, appUser=hadoop
2019-02-05 17:43:25,607 INFO distributedshell.Client: Got application report from ASM for, appId=2, clientToAMToken=null, appDiagnostics=, appMasterHost=my-hdp01/192.168.56.2, appQueue=default, appMasterRpcPort=-1, appStartTime=1549356192417, yarnAppState=RUNNING, distributedFinalState=UNDEFINED, appTrackingUrl=http://my-hdp01:8088/proxy/application_1549355593168_0002/, appUser=hadoop
2019-02-05 17:43:26,614 INFO distributedshell.Client: Got application report from ASM for, appId=2, clientToAMToken=null, appDiagnostics=, appMasterHost=my-hdp01/192.168.56.2, appQueue=default, appMasterRpcPort=-1, appStartTime=1549356192417, yarnAppState=RUNNING, distributedFinalState=UNDEFINED, appTrackingUrl=http://my-hdp01:8088/proxy/application_1549355593168_0002/, appUser=hadoop
2019-02-05 17:43:27,621 INFO distributedshell.Client: Got application report from ASM for, appId=2, clientToAMToken=null, appDiagnostics=, appMasterHost=my-hdp01/192.168.56.2, appQueue=default, appMasterRpcPort=-1, appStartTime=1549356192417, yarnAppState=RUNNING, distributedFinalState=UNDEFINED, appTrackingUrl=http://my-hdp01:8088/proxy/application_1549355593168_0002/, appUser=hadoop
2019-02-05 17:43:28,631 INFO distributedshell.Client: Got application report from ASM for, appId=2, clientToAMToken=null, appDiagnostics=, appMasterHost=my-hdp01/192.168.56.2, appQueue=default, appMasterRpcPort=-1, appStartTime=1549356192417, yarnAppState=RUNNING, distributedFinalState=UNDEFINED, appTrackingUrl=http://my-hdp01:8088/proxy/application_1549355593168_0002/, appUser=hadoop
2019-02-05 17:43:29,641 INFO distributedshell.Client: Got application report from ASM for, appId=2, clientToAMToken=null, appDiagnostics=, appMasterHost=my-hdp01/192.168.56.2, appQueue=default, appMasterRpcPort=-1, appStartTime=1549356192417, yarnAppState=FINISHED, distributedFinalState=SUCCEEDED, appTrackingUrl=http://my-hdp01:8088/proxy/application_1549355593168_0002/, appUser=hadoop
2019-02-05 17:43:29,643 INFO distributedshell.Client: Application has completed successfully. Breaking monitoring loop
2019-02-05 17:43:29,644 INFO distributedshell.Client: Application completed successfully

I confirmed that it shows "Application completed successfully" at the end.

But I don't know how to check that my Docker container ran on YARN.
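
(I assume I could watch the Docker daemon on the NodeManager host while the job runs, and pull the container logs afterwards, for example:

$ docker ps -a
$ yarn logs -applicationId application_1549355593168_0002

but I am not sure whether that is the right way to prove the containers were launched through the Docker runtime.)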

How do I confirm that YARN is managing my Docker container application?

4 REPLIES

Explorer

I have another error...


2019-03-07 15:09:59,925 INFO distributedshell.Client: Got application report from ASM for, appId=3, clientToAMToken=null, appDiagnostics=Application Failure: desired = 1, completed = 1, allocated = 1, failed = 1, diagnostics = [2019-03-07 15:09:53.099]Exception from container-launch.
Container id: container_1551938387294_0003_01_000002
Exit code: 32
Exception message: Launch container failed
Shell error output: Feature disabled: docker

Shell output: main : command provided 4
main : run as user is massu
main : requested yarn user is root


Can you help me?

New Contributor

Hi @t_masuoka, in your question you mentioned that the application completed successfully. What did you change to get the following error:

Shell error output: Feature disabled: docker


Explorer

Hi,

I tried the article at the URL below.

https://community.hortonworks.com/articles/226331/dockerized-yarn-services-quickstart.html


But I can't launch the Docker container application on YARN.

I get an error when I run the command below.

$ curl -X POST -H "Content-Type: application/json" http://my-hdp01.test.com:8088/app/v1/services?user.name=massu -d @yarnservice.json

The error output:

<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Error 500 Server Error</title>
</head>
<body><h2>HTTP ERROR 500</h2>
<p>Problem accessing /app/v1/services. Reason:
<pre>    Server Error</pre></p><h3>Caused by:</h3><pre>javax.servlet.ServletException: API-Service@8899a5c2==com.sun.jersey.spi.container.servlet.ServletContainer,jsp=null,order=-1,inst=false
        at org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:664)
        at org.eclipse.jetty.servlet.ServletHolder.getServlet(ServletHolder.java:499)
        at org.eclipse.jetty.servlet.ServletHolder.ensureInstance(ServletHolder.java:791)
        at org.eclipse.jetty.servlet.ServletHolder.prepare(ServletHolder.java:776)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:579)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
        at org.eclipse.jetty.server.Server.handle(Server.java:534)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
        at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
        at java.lang.Thread.run(Thread.java:748)
Caused by: com.sun.jersey.api.container.ContainerException: The ResourceConfig instance does not contain any root resource classes.
        at com.sun.jersey.server.impl.application.RootResourceUriRules.<init>(RootResourceUriRules.java:99)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._initiate(WebApplicationImpl.java:1359)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.access$700(WebApplicationImpl.java:180)
        at com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:799)
        at com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:795)
        at com.sun.jersey.spi.inject.Errors.processWithErrors(Errors.java:193)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.initiate(WebApplicationImpl.java:795)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.initiate(WebApplicationImpl.java:790)
        at com.sun.jersey.spi.container.servlet.ServletContainer.initiate(ServletContainer.java:509)
        at com.sun.jersey.spi.container.servlet.ServletContainer$InternalWebComponent.initiate(ServletContainer.java:339)
        at com.sun.jersey.spi.container.servlet.WebComponent.load(WebComponent.java:605)
        at com.sun.jersey.spi.container.servlet.WebComponent.init(WebComponent.java:207)
        at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:394)
        at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:577)
        at javax.servlet.GenericServlet.init(GenericServlet.java:244)
        at org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:643)
        ... 26 more
</pre>
<h3>Caused by:</h3><pre>com.sun.jersey.api.container.ContainerException: The ResourceConfig instance does not contain any root resource classes.
        at com.sun.jersey.server.impl.application.RootResourceUriRules.<init>(RootResourceUriRules.java:99)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._initiate(WebApplicationImpl.java:1359)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.access$700(WebApplicationImpl.java:180)
        at com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:799)
        at com.sun.jersey.server.impl.application.WebApplicationImpl$13.f(WebApplicationImpl.java:795)
        at com.sun.jersey.spi.inject.Errors.processWithErrors(Errors.java:193)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.initiate(WebApplicationImpl.java:795)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.initiate(WebApplicationImpl.java:790)
        at com.sun.jersey.spi.container.servlet.ServletContainer.initiate(ServletContainer.java:509)
        at com.sun.jersey.spi.container.servlet.ServletContainer$InternalWebComponent.initiate(ServletContainer.java:339)
        at com.sun.jersey.spi.container.servlet.WebComponent.load(WebComponent.java:605)
        at com.sun.jersey.spi.container.servlet.WebComponent.init(WebComponent.java:207)
        at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:394)
        at com.sun.jersey.spi.container.servlet.ServletContainer.init(ServletContainer.java:577)
        at javax.servlet.GenericServlet.init(GenericServlet.java:244)
        at org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:643)
        at org.eclipse.jetty.servlet.ServletHolder.getServlet(ServletHolder.java:499)
        at org.eclipse.jetty.servlet.ServletHolder.ensureInstance(ServletHolder.java:791)
        at org.eclipse.jetty.servlet.ServletHolder.prepare(ServletHolder.java:776)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:579)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
        at org.eclipse.jetty.server.Server.handle(Server.java:534)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
        at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
        at java.lang.Thread.run(Thread.java:748)
</pre>

</body>
</html>
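
(One thing I have not tried: I did not set yarn.webapp.api-service.enable in yarn-site.xml. If the services REST API requires it, that could explain the 500. I would add the property below on the ResourceManager and restart it, but I have not confirmed this yet.)

<property>
  <name>yarn.webapp.api-service.enable</name>
  <value>true</value>
</property>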


Can you help me?



New Contributor

Please share the yarnservice.json file you are POSTing via curl.
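
For reference, a minimal Docker-based service spec looks roughly like this (the service name, component name and launch command are placeholders; adjust them to your setup):

{
  "name": "sleeper-service",
  "version": "1.0.0",
  "components": [
    {
      "name": "sleeper",
      "number_of_containers": 1,
      "artifact": {
        "id": "local/centos:latest",
        "type": "DOCKER"
      },
      "launch_command": "sleep 90",
      "resource": {
        "cpus": 1,
        "memory": "256"
      }
    }
  ]
}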