Spark Command: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-3.b13.el7_5.x86_64/jre//bin/java -Dhdp.version=2.6.5.0-292 -cp /usr/hdp/current/spark2-thriftserver/conf/:/usr/hdp/current/spark2-thriftserver/jars/*:/usr/hdp/2.6.5.0-292/hadoop/conf/ -Xmx1024m org.apache.spark.deploy.SparkSubmit --properties-file /usr/hdp/current/spark2-thriftserver/conf/spark-thrift-sparkconf.conf --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name Thrift JDBC/ODBC Server spark-internal
========================================
Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead.
19/06/07 11:21:44 INFO HiveThriftServer2: Started daemon with process name: 27539@lhdcsi02v.production.local
19/06/07 11:21:44 INFO SignalUtils: Registered signal handler for TERM
19/06/07 11:21:45 INFO SignalUtils: Registered signal handler for HUP
19/06/07 11:21:45 INFO SignalUtils: Registered signal handler for INT
19/06/07 11:21:45 INFO HiveThriftServer2: Starting SparkContext
19/06/07 11:21:45 INFO SparkContext: Running Spark version 2.3.0.2.6.5.0-292
19/06/07 11:21:45 INFO SparkContext: Submitted application: Thrift JDBC/ODBC Server
19/06/07 11:21:45 INFO SecurityManager: Changing view acls to: hive
19/06/07 11:21:45 INFO SecurityManager: Changing modify acls to: hive
19/06/07 11:21:45 INFO SecurityManager: Changing view acls groups to:
19/06/07 11:21:45 INFO SecurityManager: Changing modify acls groups to:
19/06/07 11:21:45 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hive); groups with view permissions: Set(); users with modify permissions: Set(hive); groups with modify permissions: Set()
19/06/07 11:21:45 INFO Utils: Successfully started service 'sparkDriver' on port 34793.
19/06/07 11:21:45 INFO SparkEnv: Registering MapOutputTracker
19/06/07 11:21:46 INFO SparkEnv: Registering BlockManagerMaster
19/06/07 11:21:46 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/06/07 11:21:46 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/06/07 11:21:46 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-1cfbe6c6-107f-442a-903d-ead26b86d642
19/06/07 11:21:46 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
19/06/07 11:21:46 INFO SparkEnv: Registering OutputCommitCoordinator
19/06/07 11:21:46 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
19/06/07 11:21:46 INFO Utils: Successfully started service 'SparkUI' on port 4041.
19/06/07 11:21:46 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://lhdcsi02v.production.local:4041
19/06/07 11:21:46 INFO FairSchedulableBuilder: Creating Fair Scheduler pools from /usr/hdp/current/spark2-thriftserver/conf/spark-thrift-fairscheduler.xml
19/06/07 11:21:46 INFO FairSchedulableBuilder: Created pool: default, schedulingMode: FAIR, minShare: 2, weight: 1
19/06/07 11:21:46 INFO Utils: Using initial executors = 0, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
19/06/07 11:21:48 INFO Client: Attempting to login to the Kerberos using principal: hive/lhdcsi02v.production.local@production.local and keytab: /etc/security/keytabs/hive.service.keytab
19/06/07 11:21:48 INFO RMProxy: Connecting to ResourceManager at lhdcsi04v.production.local/10.237.14.24:8032
19/06/07 11:21:48 INFO Client: Requesting a new application from cluster with 3 NodeManagers
19/06/07 11:21:48 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (2048 MB per container)
19/06/07 11:21:48 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
19/06/07 11:21:48 INFO Client: Setting up container launch context for our AM
19/06/07 11:21:48 INFO Client: Setting up the launch environment for our AM container
19/06/07 11:21:48 INFO Client: Credentials file set to: credentials-1fcc15da-a7ad-4395-bd50-930d5c62ce11
19/06/07 11:21:48 INFO Client: Preparing resources for our AM container
19/06/07 11:21:48 INFO HadoopFSDelegationTokenProvider: getting token for: DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_117970512_1, ugi=hive/lhdcsi02v.production.local@production.local (auth:KERBEROS)]]
19/06/07 11:21:48 INFO DFSClient: Created HDFS_DELEGATION_TOKEN token 5199 for hive on 10.237.14.24:8020
19/06/07 11:21:48 INFO HadoopFSDelegationTokenProvider: getting token for: DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_117970512_1, ugi=hive/lhdcsi02v.production.local@production.local (auth:KERBEROS)]]
19/06/07 11:21:48 INFO DFSClient: Created HDFS_DELEGATION_TOKEN token 5200 for hive on 10.237.14.24:8020
19/06/07 11:21:48 INFO HadoopFSDelegationTokenProvider: Renewal interval is 86400048 for token HDFS_DELEGATION_TOKEN
19/06/07 11:21:51 INFO Client: To enable the AM to login from keytab, credentials are being copied over to the AM via the YARN Secure Distributed Cache.
19/06/07 11:21:51 INFO Client: Uploading resource file:/etc/security/keytabs/hive.service.keytab -> hdfs://lhdcsi04v.production.local:8020/user/hive/.sparkStaging/application_1559901109553_0003/hive.service.keytab
19/06/07 11:21:51 INFO Client: Use hdfs cache file as spark.yarn.archive for HDP, hdfsCacheFile:hdfs://lhdcsi04v.production.local:8020/hdp/apps/2.6.5.0-292/spark2/spark2-hdp-yarn-archive.tar.gz
19/06/07 11:21:51 INFO Client: Source and destination file systems are the same. Not copying hdfs://lhdcsi04v.production.local:8020/hdp/apps/2.6.5.0-292/spark2/spark2-hdp-yarn-archive.tar.gz
19/06/07 11:21:51 INFO Client: Uploading resource file:/tmp/spark-341d0225-e9e8-482d-9117-0e4d4caeaa9b/__spark_conf__5788453412365138181.zip -> hdfs://lhdcsi04v.production.local:8020/user/hive/.sparkStaging/application_1559901109553_0003/__spark_conf__.zip
19/06/07 11:21:52 INFO SecurityManager: Changing view acls to: hive
19/06/07 11:21:52 INFO SecurityManager: Changing modify acls to: hive
19/06/07 11:21:52 INFO SecurityManager: Changing view acls groups to:
19/06/07 11:21:52 INFO SecurityManager: Changing modify acls groups to:
19/06/07 11:21:52 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hive); groups with view permissions: Set(); users with modify permissions: Set(hive); groups with modify permissions: Set()
19/06/07 11:21:52 INFO Client: Submitting application application_1559901109553_0003 to ResourceManager
19/06/07 11:21:52 INFO YarnClientImpl: Submitted application application_1559901109553_0003
19/06/07 11:21:52 INFO SchedulerExtensionServices: Starting Yarn extension services with app application_1559901109553_0003 and attemptId None
19/06/07 11:21:53 INFO Client: Application report for application_1559901109553_0003 (state: ACCEPTED)
19/06/07 11:21:53 INFO Client:
	 client token: Token { kind: YARN_CLIENT_TOKEN, service: }
	 diagnostics: AM container is launched, waiting for AM container to Register with RM
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: root.hive
	 start time: 1559902912121
	 final status: UNDEFINED
	 tracking URL: http://lhdcsi04v.production.local:8088/proxy/application_1559901109553_0003/
	 user: hive
19/06/07 11:21:54 INFO Client: Application report for application_1559901109553_0003 (state: ACCEPTED)
19/06/07 11:21:55 INFO Client: Application report for application_1559901109553_0003 (state: ACCEPTED)
19/06/07 11:21:56 INFO Client: Application report for application_1559901109553_0003 (state: ACCEPTED)
19/06/07 11:21:57 INFO Client: Application report for application_1559901109553_0003 (state: ACCEPTED)
19/06/07 11:21:58 INFO Client: Application report for application_1559901109553_0003 (state: ACCEPTED)
19/06/07 11:21:59 INFO Client: Application report for application_1559901109553_0003 (state: ACCEPTED)
19/06/07 11:22:00 INFO Client: Application report for application_1559901109553_0003 (state: ACCEPTED)
19/06/07 11:22:01 INFO Client: Application report for application_1559901109553_0003 (state: ACCEPTED)
19/06/07 11:22:02 INFO Client: Application report for application_1559901109553_0003 (state: ACCEPTED)
19/06/07 11:22:03 INFO Client: Application report for application_1559901109553_0003 (state: ACCEPTED)
19/06/07 11:22:04 INFO Client: Application report for application_1559901109553_0003 (state: ACCEPTED)
19/06/07 11:22:04 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> lhdcsi04v.production.local, PROXY_URI_BASES -> http://lhdcsi04v.production.local:8088/proxy/application_1559901109553_0003), /proxy/application_1559901109553_0003
19/06/07 11:22:04 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
19/06/07 11:22:05 INFO Client: Application report for application_1559901109553_0003 (state: RUNNING)
19/06/07 11:22:05 INFO Client:
	 client token: Token { kind: YARN_CLIENT_TOKEN, service: }
	 diagnostics: N/A
	 ApplicationMaster host: 10.237.14.26
	 ApplicationMaster RPC port: 0
	 queue: root.hive
	 start time: 1559902912121
	 final status: UNDEFINED
	 tracking URL: http://lhdcsi04v.production.local:8088/proxy/application_1559901109553_0003/
	 user: hive
19/06/07 11:22:05 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark-client://YarnAM)
19/06/07 11:22:05 INFO YarnClientSchedulerBackend: Application application_1559901109553_0003 has started running.
19/06/07 11:22:05 INFO CredentialUpdater: Scheduling credentials refresh from HDFS in 69103620 ms.
19/06/07 11:22:05 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 39630.
19/06/07 11:22:05 INFO NettyBlockTransferService: Server created on lhdcsi02v.production.local:39630
19/06/07 11:22:05 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/06/07 11:22:05 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, lhdcsi02v.production.local, 39630, None)
19/06/07 11:22:05 INFO BlockManagerMasterEndpoint: Registering block manager lhdcsi02v.production.local:39630 with 366.3 MB RAM, BlockManagerId(driver, lhdcsi02v.production.local, 39630, None)
19/06/07 11:22:05 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, lhdcsi02v.production.local, 39630, None)
19/06/07 11:22:05 INFO BlockManager: external shuffle service port = 7447
19/06/07 11:22:05 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, lhdcsi02v.production.local, 39630, None)
19/06/07 11:22:06 INFO EventLoggingListener: Logging events to hdfs:/spark2-history/application_1559901109553_0003
19/06/07 11:22:06 INFO Utils: Using initial executors = 0, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
19/06/07 11:22:06 INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
19/06/07 11:22:07 INFO SharedState: loading hive config file: file:/etc/spark2/2.6.5.0-292/0/hive-site.xml
19/06/07 11:22:07 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/home/hive/spark-warehouse').
19/06/07 11:22:07 INFO SharedState: Warehouse path is 'file:/home/hive/spark-warehouse'.
19/06/07 11:22:07 INFO HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
19/06/07 11:22:08 INFO metastore: Trying to connect to metastore with URI thrift://lhdcsi02v.production.local:9083
19/06/07 11:22:08 INFO metastore: Connected to metastore.
19/06/07 11:22:09 INFO SessionState: Created local directory: /tmp/2fe4203b-335b-4114-a239-fd76171d13e7_resources
19/06/07 11:22:09 INFO SessionState: Created HDFS directory: /tmp/hive/hive/2fe4203b-335b-4114-a239-fd76171d13e7
19/06/07 11:22:09 INFO SessionState: Created local directory: /tmp/hive/2fe4203b-335b-4114-a239-fd76171d13e7
19/06/07 11:22:09 INFO SessionState: Created HDFS directory: /tmp/hive/hive/2fe4203b-335b-4114-a239-fd76171d13e7/_tmp_space.db
19/06/07 11:22:09 INFO HiveClientImpl: Warehouse location for Hive client (version 1.2.2) is file:/home/hive/spark-warehouse
19/06/07 11:22:09 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
19/06/07 11:22:09 INFO HiveUtils: Initializing execution hive, version 1.2.1
19/06/07 11:22:10 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
19/06/07 11:22:10 INFO ObjectStore: ObjectStore, initialize called
19/06/07 11:22:10 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
19/06/07 11:22:10 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
19/06/07 11:22:12 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
19/06/07 11:22:14 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
19/06/07 11:22:14 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
19/06/07 11:22:15 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
19/06/07 11:22:15 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
19/06/07 11:22:15 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
19/06/07 11:22:15 INFO ObjectStore: Initialized ObjectStore
19/06/07 11:22:15 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
19/06/07 11:22:15 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
19/06/07 11:22:16 INFO HiveMetaStore: Added admin role in metastore
19/06/07 11:22:16 INFO HiveMetaStore: Added public role in metastore
19/06/07 11:22:16 INFO HiveMetaStore: No user is added in admin role, since config is empty
19/06/07 11:22:16 INFO HiveMetaStore: 0: get_all_databases
19/06/07 11:22:16 INFO audit: ugi=hive/lhdcsi02v.production.local@production.local ip=unknown-ip-addr cmd=get_all_databases
19/06/07 11:22:16 INFO HiveMetaStore: 0: get_functions: db=default pat=*
19/06/07 11:22:16 INFO audit: ugi=hive/lhdcsi02v.production.local@production.local ip=unknown-ip-addr cmd=get_functions: db=default pat=*
19/06/07 11:22:16 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
19/06/07 11:22:16 INFO SessionState: Created local directory: /tmp/b74178ce-416f-413f-9f39-f6e237c36759_resources
19/06/07 11:22:16 INFO SessionState: Created HDFS directory: /tmp/hive/hive/b74178ce-416f-413f-9f39-f6e237c36759
19/06/07 11:22:16 INFO SessionState: Created local directory: /tmp/hive/b74178ce-416f-413f-9f39-f6e237c36759
19/06/07 11:22:16 INFO SessionState: Created HDFS directory: /tmp/hive/hive/b74178ce-416f-413f-9f39-f6e237c36759/_tmp_space.db
19/06/07 11:22:16 INFO HiveClientImpl: Warehouse location for Hive client (version 1.2.2) is file:/home/hive/spark-warehouse
19/06/07 11:22:16 INFO UserGroupInformation: Login successful for user hive/lhdcsi02v.production.local@production.local using keytab file /etc/security/keytabs/hive.service.keytab
19/06/07 11:22:16 INFO SessionManager: Operation log root directory is created: /tmp/hive/operation_logs
19/06/07 11:22:16 INFO SessionManager: HiveServer2: Background operation thread pool size: 100
19/06/07 11:22:16 INFO SessionManager: HiveServer2: Background operation thread wait queue size: 100
19/06/07 11:22:16 INFO SessionManager: HiveServer2: Background operation thread keepalive time: 10 seconds
19/06/07 11:22:16 INFO AbstractService: Service:OperationManager is inited.
19/06/07 11:22:16 INFO AbstractService: Service:SessionManager is inited.
19/06/07 11:22:16 INFO AbstractService: Service: CLIService is inited.
19/06/07 11:22:16 INFO AbstractService: Service:ThriftBinaryCLIService is inited.
19/06/07 11:22:16 INFO AbstractService: Service: HiveServer2 is inited.
19/06/07 11:22:16 INFO AbstractService: Service:OperationManager is started.
19/06/07 11:22:16 INFO AbstractService: Service:SessionManager is started.
19/06/07 11:22:16 INFO AbstractService: Service:CLIService is started.
19/06/07 11:22:16 INFO ObjectStore: ObjectStore, initialize called
19/06/07 11:22:16 INFO Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
19/06/07 11:22:16 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
19/06/07 11:22:16 INFO ObjectStore: Initialized ObjectStore
19/06/07 11:22:16 INFO HiveMetaStore: 0: get_databases: default
19/06/07 11:22:16 INFO audit: ugi=hive/lhdcsi02v.production.local@production.local ip=unknown-ip-addr cmd=get_databases: default
19/06/07 11:22:16 INFO HiveMetaStore: 0: Shutting down the object store...
19/06/07 11:22:16 INFO audit: ugi=hive/lhdcsi02v.production.local@production.local ip=unknown-ip-addr cmd=Shutting down the object store...
19/06/07 11:22:16 INFO HiveMetaStore: 0: Metastore shutdown complete.
19/06/07 11:22:16 INFO audit: ugi=hive/lhdcsi02v.production.local@production.local ip=unknown-ip-addr cmd=Metastore shutdown complete.
19/06/07 11:22:16 INFO AbstractService: Service:ThriftBinaryCLIService is started.
19/06/07 11:22:16 INFO AbstractService: Service:HiveServer2 is started.
19/06/07 11:22:16 INFO HiveThriftServer2: HiveThriftServer2 started
19/06/07 11:22:16 INFO UserGroupInformation: Login successful for user hive/lhdcsi02v.production.local@production.local using keytab file /etc/security/keytabs/hive.service.keytab
19/06/07 11:22:16 ERROR ThriftCLIService: Error starting HiveServer2: could not start ThriftBinaryCLIService
java.lang.NoSuchMethodError: org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server.startDelegationTokenSecretManager(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/Object;Lorg/apache/hadoop/hive/thrift/HadoopThriftAuthBridge$Server$ServerMode;)V
	at org.apache.hive.service.auth.HiveAuthFactory.<init>(HiveAuthFactory.java:125)
	at org.apache.hive.service.cli.thrift.ThriftBinaryCLIService.run(ThriftBinaryCLIService.java:57)
	at java.lang.Thread.run(Thread.java:748)
19/06/07 11:22:16 INFO HiveServer2: Shutting down HiveServer2
19/06/07 11:22:16 INFO AbstractService: Service:ThriftBinaryCLIService is stopped.
19/06/07 11:22:16 INFO AbstractService: Service:OperationManager is stopped.
19/06/07 11:22:16 INFO AbstractService: Service:SessionManager is stopped.
19/06/07 11:22:16 INFO SparkUI: Stopped Spark web UI at http://lhdcsi02v.production.local:4041
19/06/07 11:22:26 WARN ShutdownHookManager: ShutdownHook '$anon$2' timeout, java.util.concurrent.TimeoutException
java.util.concurrent.TimeoutException
	at java.util.concurrent.FutureTask.get(FutureTask.java:205)
	at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:67)
19/06/07 11:22:26 ERROR Utils: Uncaught exception in thread pool-1-thread-1
java.lang.InterruptedException
	at java.lang.Object.wait(Native Method)
	at java.lang.Thread.join(Thread.java:1252)
	at java.lang.Thread.join(Thread.java:1326)
	at org.apache.spark.scheduler.AsyncEventQueue.stop(AsyncEventQueue.scala:133)
	at org.apache.spark.scheduler.LiveListenerBus$$anonfun$stop$1.apply(LiveListenerBus.scala:219)
	at org.apache.spark.scheduler.LiveListenerBus$$anonfun$stop$1.apply(LiveListenerBus.scala:219)
	at scala.collection.Iterator$class.foreach(Iterator.scala:893)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at org.apache.spark.scheduler.LiveListenerBus.stop(LiveListenerBus.scala:219)
	at org.apache.spark.SparkContext$$anonfun$stop$6.apply$mcV$sp(SparkContext.scala:1922)
	at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1357)
	at org.apache.spark.SparkContext.stop(SparkContext.scala:1921)
	at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.stop(SparkSQLEnv.scala:66)
	at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$$anonfun$main$1.apply$mcV$sp(HiveThriftServer2.scala:82)
	at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1988)
	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
	at scala.util.Try$.apply(Try.scala:192)
	at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
	at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
19/06/07 11:22:26 INFO AbstractService: Service:CLIService is stopped.
19/06/07 11:22:26 INFO AbstractService: Service:HiveServer2 is stopped.
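
The actual failure in this log is the java.lang.NoSuchMethodError raised while starting ThriftBinaryCLIService: the HadoopThriftAuthBridge$Server class the JVM loaded does not declare startDelegationTokenSecretManager with the (Configuration, Object, ServerMode) signature that Spark's bundled Hive 1.2.1 client (see the HiveUtils lines above) was compiled against. By JVM semantics this means a different version of that class won the classpath resolution, so the first thing to check is which jar it was loaded from. The following is a minimal diagnostic sketch, not part of the log: the class and method names come straight from the stack trace, while the file name WhichJar.java and the classpath in the usage comment are illustrative assumptions.

// WhichJar.java -- diagnostic sketch (illustrative; run with the same
// classpath the Thrift server used, e.g.):
//   javac WhichJar.java
//   java -cp '.:/usr/hdp/current/spark2-thriftserver/jars/*' WhichJar
import java.lang.reflect.Method;

public class WhichJar {
    public static void main(String[] args) throws Exception {
        // The class named in the NoSuchMethodError stack trace.
        Class<?> server =
                Class.forName("org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server");
        // Report which jar the JVM actually loaded the class from.
        System.out.println(server.getProtectionDomain().getCodeSource().getLocation());
        // List the startDelegationTokenSecretManager overloads the loaded class
        // really declares, to compare against the three-argument signature
        // (Configuration, Object, ServerMode) named in the error.
        for (Method m : server.getDeclaredMethods()) {
            if (m.getName().equals("startDelegationTokenSecretManager")) {
                System.out.println(m);
            }
        }
    }
}

If the printed location is not a Hive 1.2.1 jar under /usr/hdp/current/spark2-thriftserver/jars/, or no overload matches the signature in the error, then a different Hive version's jar is shadowing the one Spark expects, which would be consistent with the immediate HiveServer2 shutdown that follows in the log.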