
main : requested yarn user is kadmin User kadmin not found

Contributor

Hi team,

When running a Spark program, the user cannot be found! Please help me, thank you!

spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn-client \
--executor-memory 1G \
--num-executors 2 \
--driver-memory 1g \
--executor-cores 1 \
--principal kadmin/admin@NGAA.COM \
--keytab   /home/test/sparktest/princpal/sparkjob.keytab \
/opt/cloudera/parcels/CDH/lib/spark/lib/spark-examples.jar 12

Error messages:

17/02/10 13:54:16 INFO security.UserGroupInformation: Login successful for user kadmin/admin@NGAA.COM using keytab file /home/test/sparktest/princpal/sparkjob.keytab
17/02/10 13:54:16 INFO spark.SparkContext: Running Spark version 1.6.0
17/02/10 13:54:16 INFO spark.SecurityManager: Changing view acls to: root,kadmin
17/02/10 13:54:16 INFO spark.SecurityManager: Changing modify acls to: root,kadmin
17/02/10 13:54:16 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root, kadmin); users with modify permissions: Set(root, kadmin)
17/02/10 13:54:17 INFO util.Utils: Successfully started service 'sparkDriver' on port 56214.
17/02/10 13:54:17 INFO slf4j.Slf4jLogger: Slf4jLogger started
17/02/10 13:54:17 INFO Remoting: Starting remoting
17/02/10 13:54:18 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.10.100.51:40936]
17/02/10 13:54:18 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriverActorSystem@10.10.100.51:40936]
17/02/10 13:54:18 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 40936.
17/02/10 13:54:18 INFO spark.SparkEnv: Registering MapOutputTracker
17/02/10 13:54:18 INFO spark.SparkEnv: Registering BlockManagerMaster
17/02/10 13:54:18 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-cf37cdde-4eab-4804-b84b-b5f937828aa7
17/02/10 13:54:18 INFO storage.MemoryStore: MemoryStore started with capacity 530.3 MB
17/02/10 13:54:18 INFO spark.SparkEnv: Registering OutputCommitCoordinator
17/02/10 13:54:19 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
17/02/10 13:54:19 INFO ui.SparkUI: Started SparkUI at http://10.10.100.51:4040
17/02/10 13:54:19 INFO spark.SparkContext: Added JAR file:/opt/cloudera/parcels/CDH/lib/spark/lib/spark-examples.jar at spark://10.10.100.51:56214/jars/spark-examples.jar with timestamp 1486706059601
17/02/10 13:54:19 INFO yarn.Client: Attempting to login to the Kerberos using principal: kadmin/admin@NGAA.COM and keytab: /home/test/sparktest/princpal/sparkjob.keytab
17/02/10 13:54:19 INFO client.RMProxy: Connecting to ResourceManager at hadoop1/10.10.100.51:8032
17/02/10 13:54:20 INFO yarn.Client: Requesting a new application from cluster with 4 NodeManagers
17/02/10 13:54:20 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
17/02/10 13:54:20 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
17/02/10 13:54:20 INFO yarn.Client: Setting up container launch context for our AM
17/02/10 13:54:20 INFO yarn.Client: Setting up the launch environment for our AM container
17/02/10 13:54:21 INFO yarn.Client: Credentials file set to: credentials-79afe260-414b-4df7-8242-3cd1a279dbc7
17/02/10 13:54:21 INFO yarn.YarnSparkHadoopUtil: getting token for namenode: hdfs://hadoop2:8020/user/kadmin/.sparkStaging/application_1486705141135_0002
17/02/10 13:54:21 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 44 for kadmin on 10.10.100.52:8020
17/02/10 13:54:21 INFO yarn.Client: Renewal Interval set to 86400061
17/02/10 13:54:21 INFO yarn.Client: Preparing resources for our AM container
17/02/10 13:54:21 INFO yarn.YarnSparkHadoopUtil: getting token for namenode: hdfs://hadoop2:8020/user/kadmin/.sparkStaging/application_1486705141135_0002
17/02/10 13:54:21 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 45 for kadmin on 10.10.100.52:8020
17/02/10 13:54:22 INFO hive.metastore: Trying to connect to metastore with URI thrift://hadoop1:9083
17/02/10 13:54:22 INFO hive.metastore: Opened a connection to metastore, current connections: 1
17/02/10 13:54:22 INFO hive.metastore: Connected to metastore.
17/02/10 13:54:22 INFO hive.metastore: Closed a connection to metastore, current connections: 0
17/02/10 13:54:23 INFO yarn.Client: To enable the AM to login from keytab, credentials are being copied over to the AM via the YARN Secure Distributed Cache.
17/02/10 13:54:23 INFO yarn.Client: Uploading resource file:/home/test/sparktest/princpal/sparkjob.keytab -> hdfs://hadoop2:8020/user/kadmin/.sparkStaging/application_1486705141135_0002/sparkjob.keytab
17/02/10 13:54:23 INFO yarn.Client: Uploading resource file:/tmp/spark-79d08367-6f8d-4cb3-813e-d450e90a3128/__spark_conf__4615276915023723512.zip -> hdfs://hadoop2:8020/user/kadmin/.sparkStaging/application_1486705141135_0002/__spark_conf__4615276915023723512.zip
17/02/10 13:54:23 INFO spark.SecurityManager: Changing view acls to: root,kadmin
17/02/10 13:54:23 INFO spark.SecurityManager: Changing modify acls to: root,kadmin
17/02/10 13:54:23 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root, kadmin); users with modify permissions: Set(root, kadmin)
17/02/10 13:54:23 INFO yarn.Client: Submitting application 2 to ResourceManager
17/02/10 13:54:23 INFO impl.YarnClientImpl: Submitted application application_1486705141135_0002
17/02/10 13:54:24 INFO yarn.Client: Application report for application_1486705141135_0002 (state: FAILED)
17/02/10 13:54:24 INFO yarn.Client: 
	 client token: N/A
	 diagnostics: Application application_1486705141135_0002 failed 2 times due to AM Container for appattempt_1486705141135_0002_000002 exited with  exitCode: -1000
For more detailed output, check application tracking page:http://hadoop1:8088/proxy/application_1486705141135_0002/Then, click on links to logs of each attempt.
Diagnostics: Application application_1486705141135_0002 initialization failed (exitCode=255) with output: main : command provided 0
main : run as user is kadmin
main : requested yarn user is kadmin
User kadmin not found

Failing this attempt. Failing the application.
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: root.users.kadmin
	 start time: 1486706063635
	 final status: FAILED
	 tracking URL: http://hadoop1:8088/cluster/app/application_1486705141135_0002
	 user: kadmin
17/02/10 13:54:24 INFO yarn.Client: Deleting staging directory .sparkStaging/application_1486705141135_0002
17/02/10 13:54:24 ERROR spark.SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:124)
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:64)
	at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:541)
	at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:29)
	at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/02/10 13:54:25 INFO ui.SparkUI: Stopped Spark web UI at http://10.10.100.51:4040
17/02/10 13:54:25 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
17/02/10 13:54:25 INFO cluster.YarnClientSchedulerBackend: Asking each executor to shut down
17/02/10 13:54:25 INFO cluster.YarnClientSchedulerBackend: Stopped
17/02/10 13:54:25 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/02/10 13:54:25 ERROR util.Utils: Uncaught exception in thread main
java.lang.NullPointerException
	at org.apache.spark.network.shuffle.ExternalShuffleClient.close(ExternalShuffleClient.java:152)
	at org.apache.spark.storage.BlockManager.stop(BlockManager.scala:1231)
	at org.apache.spark.SparkEnv.stop(SparkEnv.scala:96)
	at org.apache.spark.SparkContext$$anonfun$stop$12.apply$mcV$sp(SparkContext.scala:1767)
	at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1230)
	at org.apache.spark.SparkContext.stop(SparkContext.scala:1766)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:613)
	at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:29)
	at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/02/10 13:54:25 INFO spark.SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:124)
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:64)
	at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:541)
	at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:29)
	at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/02/10 13:54:25 INFO storage.DiskBlockManager: Shutdown hook called
17/02/10 13:54:25 INFO util.ShutdownHookManager: Shutdown hook called
17/02/10 13:54:25 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-79d08367-6f8d-4cb3-813e-d450e90a3128/userFiles-58912a50-d060-42ec-8665-7a74c1be9a7b
17/02/10 13:54:25 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-79d08367-6f8d-4cb3-813e-d450e90a3

Key point:

main : run as user is kadmin
main : requested yarn user is kadmin
User kadmin not found
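
The Kerberos login itself succeeded (first line of the log above), so the failure appears to be YARN's container launcher on the NodeManager failing to resolve a local OS account for the job user, not Kerberos. A quick check on a NodeManager host (a sketch):

# on a NodeManager host: does a local OS account exist for the job user?
id kadmin
# "id: kadmin: no such user" would match the "User kadmin not found" diagnostics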

Thanks

1 ACCEPTED SOLUTION

Contributor
My problem is solved!

Thank you. For spark-submit jobs, I do not need to pass the cluster's Kerberos credentials on the command line; it is the machine that runs spark-submit that needs to be authenticated. Therefore, I removed these two parameters (--principal and --keytab):

spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn-client \
--executor-memory 1G \
--num-executors 2 \
--driver-memory 1g \
--executor-cores 1 \
/opt/cloudera/parcels/CDH/lib/spark/lib/spark-examples.jar 12
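
For reference, authenticating the client machine before submitting can be done with kinit; a sketch using the keytab path from this thread (note that the successful run below actually ran as hdfs, so that client evidently already held a ticket for an hdfs principal):

# obtain a ticket on the machine that runs spark-submit
kinit -kt /home/test/sparktest/princpal/sparkjob.keytab kadmin/admin@NGAA.COM
# confirm the ticket cache
klist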

Submitting again succeeds:

17/02/10 16:18:33 INFO spark.SparkContext: Running Spark version 1.6.0
17/02/10 16:18:34 INFO spark.SecurityManager: Changing view acls to: root,hdfs
17/02/10 16:18:34 INFO spark.SecurityManager: Changing modify acls to: root,hdfs
17/02/10 16:18:34 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root, hdfs); users with modify permissions: Set(root, hdfs)
17/02/10 16:18:34 INFO util.Utils: Successfully started service 'sparkDriver' on port 53300.
17/02/10 16:18:35 INFO slf4j.Slf4jLogger: Slf4jLogger started
17/02/10 16:18:35 INFO Remoting: Starting remoting
17/02/10 16:18:35 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.10.100.53:59243]
17/02/10 16:18:35 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriverActorSystem@10.10.100.53:59243]
17/02/10 16:18:35 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 59243.
17/02/10 16:18:35 INFO spark.SparkEnv: Registering MapOutputTracker
17/02/10 16:18:35 INFO spark.SparkEnv: Registering BlockManagerMaster
17/02/10 16:18:35 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-1521d8d2-ce43-4c6e-8068-af08ed953b77
17/02/10 16:18:35 INFO storage.MemoryStore: MemoryStore started with capacity 530.3 MB
17/02/10 16:18:35 INFO spark.SparkEnv: Registering OutputCommitCoordinator
17/02/10 16:18:36 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
17/02/10 16:18:36 INFO ui.SparkUI: Started SparkUI at http://10.10.100.53:4040
17/02/10 16:18:36 INFO spark.SparkContext: Added JAR file:/opt/cloudera/parcels/CDH/lib/spark/lib/spark-examples.jar at spark://10.10.100.53:53300/jars/spark-examples.jar with timestamp 1486714716370
17/02/10 16:18:36 INFO client.RMProxy: Connecting to ResourceManager at hadoop1/10.10.100.51:8032
17/02/10 16:18:37 INFO yarn.Client: Requesting a new application from cluster with 4 NodeManagers
17/02/10 16:18:37 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
17/02/10 16:18:37 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
17/02/10 16:18:37 INFO yarn.Client: Setting up container launch context for our AM
17/02/10 16:18:37 INFO yarn.Client: Setting up the launch environment for our AM container
17/02/10 16:18:37 INFO yarn.Client: Preparing resources for our AM container
17/02/10 16:18:38 INFO yarn.YarnSparkHadoopUtil: getting token for namenode: hdfs://hadoop2:8020/user/hdfs/.sparkStaging/application_1486705141135_0008
17/02/10 16:18:38 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 52 for hdfs on 10.10.100.52:8020
17/02/10 16:18:39 INFO hive.metastore: Trying to connect to metastore with URI thrift://hadoop1:9083
17/02/10 16:18:39 INFO hive.metastore: Opened a connection to metastore, current connections: 1
17/02/10 16:18:39 INFO hive.metastore: Connected to metastore.
17/02/10 16:18:39 INFO hive.metastore: Closed a connection to metastore, current connections: 0
17/02/10 16:18:39 INFO yarn.Client: Uploading resource file:/tmp/spark-f6434659-beb9-437c-b233-8667c48702b9/__spark_conf__2828602694267011736.zip -> hdfs://hadoop2:8020/user/hdfs/.sparkStaging/application_1486705141135_0008/__spark_conf__2828602694267011736.zip
17/02/10 16:18:40 INFO spark.SecurityManager: Changing view acls to: root,hdfs
17/02/10 16:18:40 INFO spark.SecurityManager: Changing modify acls to: root,hdfs
17/02/10 16:18:40 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root, hdfs); users with modify permissions: Set(root, hdfs)
17/02/10 16:18:40 INFO yarn.Client: Submitting application 8 to ResourceManager
17/02/10 16:18:40 INFO impl.YarnClientImpl: Submitted application application_1486705141135_0008
17/02/10 16:18:41 INFO yarn.Client: Application report for application_1486705141135_0008 (state: ACCEPTED)
17/02/10 16:18:41 INFO yarn.Client: 
	 client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
	 diagnostics: N/A
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: root.users.hdfs
	 start time: 1486714720230
	 final status: UNDEFINED
	 tracking URL: http://hadoop1:8088/proxy/application_1486705141135_0008/
	 user: hdfs
17/02/10 16:18:42 INFO yarn.Client: Application report for application_1486705141135_0008 (state: ACCEPTED)
17/02/10 16:18:43 INFO yarn.Client: Application report for application_1486705141135_0008 (state: ACCEPTED)
17/02/10 16:18:44 INFO yarn.Client: Application report for application_1486705141135_0008 (state: ACCEPTED)
17/02/10 16:18:45 INFO yarn.Client: Application report for application_1486705141135_0008 (state: ACCEPTED)
17/02/10 16:18:46 INFO yarn.Client: Application report for application_1486705141135_0008 (state: ACCEPTED)
17/02/10 16:18:47 INFO yarn.Client: Application report for application_1486705141135_0008 (state: ACCEPTED)
17/02/10 16:18:48 INFO yarn.Client: Application report for application_1486705141135_0008 (state: ACCEPTED)
17/02/10 16:18:48 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
17/02/10 16:18:48 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> hadoop1, PROXY_URI_BASES -> http://hadoop1:8088/proxy/application_1486705141135_0008), /proxy/application_1486705141135_0008
17/02/10 16:18:48 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
17/02/10 16:18:49 INFO yarn.Client: Application report for application_1486705141135_0008 (state: RUNNING)
17/02/10 16:18:49 INFO yarn.Client: 
	 client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
	 diagnostics: N/A
	 ApplicationMaster host: 10.10.100.53
	 ApplicationMaster RPC port: 0
	 queue: root.users.hdfs
	 start time: 1486714720230
	 final status: UNDEFINED
	 tracking URL: http://hadoop1:8088/proxy/application_1486705141135_0008/
	 user: hdfs
17/02/10 16:18:49 INFO cluster.YarnClientSchedulerBackend: Application application_1486705141135_0008 has started running.
17/02/10 16:18:49 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 47910.
17/02/10 16:18:49 INFO netty.NettyBlockTransferService: Server created on 47910
17/02/10 16:18:49 INFO storage.BlockManager: external shuffle service port = 7337
17/02/10 16:18:49 INFO storage.BlockManagerMaster: Trying to register BlockManager
17/02/10 16:18:49 INFO storage.BlockManagerMasterEndpoint: Registering block manager 10.10.100.53:47910 with 530.3 MB RAM, BlockManagerId(driver, 10.10.100.53, 47910)
17/02/10 16:18:49 INFO storage.BlockManagerMaster: Registered BlockManager
17/02/10 16:18:49 INFO scheduler.EventLoggingListener: Logging events to hdfs://hadoop2:8020/user/spark/applicationHistory/application_1486705141135_0008
17/02/10 16:18:49 WARN spark.SparkContext: Dynamic Allocation and num executors both set, thus dynamic allocation disabled.
17/02/10 16:18:58 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (hadoop4:49020) with ID 1
17/02/10 16:18:58 INFO storage.BlockManagerMasterEndpoint: Registering block manager hadoop4:48173 with 530.3 MB RAM, BlockManagerId(1, hadoop4, 48173)
17/02/10 16:19:01 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (hadoop2:52352) with ID 2
17/02/10 16:19:01 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
17/02/10 16:19:01 INFO storage.BlockManagerMasterEndpoint: Registering block manager hadoop2:39922 with 530.3 MB RAM, BlockManagerId(2, hadoop2, 39922)
17/02/10 16:19:01 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:36
17/02/10 16:19:01 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:36) with 12 output partitions
17/02/10 16:19:01 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:36)
17/02/10 16:19:01 INFO scheduler.DAGScheduler: Parents of final stage: List()
17/02/10 16:19:01 INFO scheduler.DAGScheduler: Missing parents: List()
17/02/10 16:19:01 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32), which has no missing parents
17/02/10 16:19:01 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1904.0 B, free 1904.0 B)
17/02/10 16:19:02 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1202.0 B, free 3.0 KB)
17/02/10 16:19:02 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.10.100.53:47910 (size: 1202.0 B, free: 530.3 MB)
17/02/10 16:19:02 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006
17/02/10 16:19:02 INFO scheduler.DAGScheduler: Submitting 12 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32)
17/02/10 16:19:02 INFO cluster.YarnScheduler: Adding task set 0.0 with 12 tasks
17/02/10 16:19:02 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, hadoop2, partition 0,PROCESS_LOCAL, 2034 bytes)
17/02/10 16:19:02 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, hadoop4, partition 1,PROCESS_LOCAL, 2036 bytes)
17/02/10 16:19:03 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on hadoop4:48173 (size: 1202.0 B, free: 530.3 MB)
17/02/10 16:19:04 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on hadoop2:39922 (size: 1202.0 B, free: 530.3 MB)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, hadoop4, partition 2,PROCESS_LOCAL, 2036 bytes)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 2231 ms on hadoop4 (1/12)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, hadoop2, partition 3,PROCESS_LOCAL, 2036 bytes)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, hadoop4, partition 4,PROCESS_LOCAL, 2036 bytes)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 2369 ms on hadoop2 (2/12)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 127 ms on hadoop4 (3/12)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, hadoop2, partition 5,PROCESS_LOCAL, 2036 bytes)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 108 ms on hadoop2 (4/12)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, hadoop4, partition 6,PROCESS_LOCAL, 2036 bytes)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 114 ms on hadoop4 (5/12)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, hadoop2, partition 7,PROCESS_LOCAL, 2036 bytes)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 85 ms on hadoop2 (6/12)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, hadoop4, partition 8,PROCESS_LOCAL, 2036 bytes)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 103 ms on hadoop4 (7/12)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, hadoop2, partition 9,PROCESS_LOCAL, 2036 bytes)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 89 ms on hadoop2 (8/12)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Starting task 10.0 in stage 0.0 (TID 10, hadoop4, partition 10,PROCESS_LOCAL, 2039 bytes)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Starting task 11.0 in stage 0.0 (TID 11, hadoop2, partition 11,PROCESS_LOCAL, 2040 bytes)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 109 ms on hadoop4 (9/12)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 83 ms on hadoop2 (10/12)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Finished task 10.0 in stage 0.0 (TID 10) in 90 ms on hadoop4 (11/12)
17/02/10 16:19:04 INFO scheduler.TaskSetManager: Finished task 11.0 in stage 0.0 (TID 11) in 77 ms on hadoop2 (12/12)
17/02/10 16:19:04 INFO scheduler.DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:36) finished in 2.695 s
17/02/10 16:19:04 INFO cluster.YarnScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool 
17/02/10 16:19:04 INFO scheduler.DAGScheduler: Job 0 finished: reduce at SparkPi.scala:36, took 3.293783 s
Pi is roughly 3.1438333333333333
17/02/10 16:19:05 INFO ui.SparkUI: Stopped Spark web UI at http://10.10.100.53:4040
17/02/10 16:19:05 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
17/02/10 16:19:05 INFO cluster.YarnClientSchedulerBackend: Interrupting monitor thread
17/02/10 16:19:05 INFO cluster.YarnClientSchedulerBackend: Asking each executor to shut down
17/02/10 16:19:05 INFO cluster.YarnClientSchedulerBackend: Stopped
17/02/10 16:19:05 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/02/10 16:19:05 INFO storage.MemoryStore: MemoryStore cleared
17/02/10 16:19:05 INFO storage.BlockManager: BlockManager stopped
17/02/10 16:19:05 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
17/02/10 16:19:05 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/02/10 16:19:05 INFO spark.SparkContext: Successfully stopped SparkContext
17/02/10 16:19:05 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
17/02/10 16:19:05 INFO util.ShutdownHookManager: Shutdown hook called
17/02/10 16:19:05 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-f6434659-beb9-437c-b233-8667c48702b9
17/02/10 16:19:05 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.



REPLIES

Super Guru

What's the principal in the keytab sparkjob.keytab? I am pretty sure it's not kadmin. Find out using the following commands on your machine:

[root@venice fire-ui]# ktutil
ktutil:  read_kt /home/test/sparktest/princpal/sparkjob.keytab
ktutil:  list
slot KVNO Principal
---- ---- ---------------------------------------------------------------------
   1    1                 <will display your principal>
   2    1                 <will display your principal>


Contributor

@mqureshi

Thanks.

The principal is kadmin, and I suspect that the kadmin user or its group is missing on the YARN nodes.

[root@hadoop1 princpal]# klist -kt sparkjob.keytab 
Keytab name: FILE:sparkjob.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------------
   3 02/06/17 19:01:40 kadmin/admin@NGAA.COM
   3 02/06/17 19:01:40 kadmin/admin@NGAA.COM
   3 02/06/17 19:01:40 kadmin/admin@NGAA.COM
   3 02/06/17 19:01:40 kadmin/admin@NGAA.COM
   3 02/06/17 19:01:40 kadmin/admin@NGAA.COM
   3 02/06/17 19:01:40 kadmin/admin@NGAA.COM


Super Collaborator
@yang jifei

The kadmin user should exist on all NodeManager hosts for the job to run under the kadmin user account.
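
A quick way to verify this across the cluster (a sketch; the hostnames come from this thread's logs, and hadoop3 is an assumption, since the logs only name hadoop1, hadoop2, and hadoop4 but report 4 NodeManagers):

# check each NodeManager host for a local kadmin account
for host in hadoop1 hadoop2 hadoop3 hadoop4; do
  ssh "$host" "id kadmin" || echo "kadmin missing on $host"
done
# if an account is missing, create it locally (useradd kadmin) or provision
# it centrally (e.g. via LDAP/SSSD) so that "getent passwd kadmin" resolves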

Contributor

@rguruvannagari

Hi,

The Spark-submitted job has already authenticated as kadmin; how do you think I should handle this? Thank you
