Log Type: stderr

Log Upload Time: Thu Jan 24 12:12:21 +0800 2019

Log Length: 232245

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/yarn/local/usercache/nifi/filecache/659/__spark_libs__8596105546787482643.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
19/01/24 12:07:22 INFO SignalUtils: Registered signal handler for TERM
19/01/24 12:07:22 INFO SignalUtils: Registered signal handler for HUP
19/01/24 12:07:22 INFO SignalUtils: Registered signal handler for INT
19/01/24 12:07:22 INFO SecurityManager: Changing view acls to: yarn,nifi
19/01/24 12:07:22 INFO SecurityManager: Changing modify acls to: yarn,nifi
19/01/24 12:07:22 INFO SecurityManager: Changing view acls groups to:
19/01/24 12:07:22 INFO SecurityManager: Changing modify acls groups to:
19/01/24 12:07:22 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, nifi); groups with view permissions: Set(); users with modify permissions: Set(yarn, nifi); groups with modify permissions: Set()
19/01/24 12:07:22 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/01/24 12:07:23 INFO ApplicationMaster: Preparing Local resources
19/01/24 12:07:23 WARN DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
19/01/24 12:07:23 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1547783609774_0221_000001
19/01/24 12:07:23 INFO ApplicationMaster: Starting the user application in a separate Thread
19/01/24 12:07:23 INFO ApplicationMaster: Waiting for spark context initialization...
19/01/24 12:07:24 INFO AnnotationConfigApplicationContext: Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@43fcf4ba: startup date [Thu Jan 24 12:07:24 CST 2019]; root of context hierarchy
19/01/24 12:07:24 INFO AutowiredAnnotationBeanPostProcessor: JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
19/01/24 12:07:24 INFO SparkContext: Running Spark version 2.3.1.3.0.1.0-187
19/01/24 12:07:24 INFO SparkContext: Submitted application: Profiler
19/01/24 12:07:24 INFO SecurityManager: Changing view acls to: yarn,nifi
19/01/24 12:07:24 INFO SecurityManager: Changing modify acls to: yarn,nifi
19/01/24 12:07:24 INFO SecurityManager: Changing view acls groups to:
19/01/24 12:07:24 INFO SecurityManager: Changing modify acls groups to:
19/01/24 12:07:24 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, nifi); groups with view permissions: Set(); users with modify permissions: Set(yarn, nifi); groups with modify permissions: Set()
19/01/24 12:07:24 INFO Utils: Successfully started service 'sparkDriver' on port 34038.
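Note on the SLF4J warning at the top of this log: it is benign here. Both bindings found are slf4j-log4j12 (one inside the __spark_libs__ archive, one in the HDP Hadoop lib directory), and SLF4J simply binds to the first one it finds, which is log4j either way, as the "Actual binding" line confirms. When the duplicate binding comes from an application's own dependency tree rather than cluster-provided jars, excluding it at build time silences the warning. A minimal sbt sketch (the hadoop-client coordinates are illustrative, not taken from this job's build):

    // drop the duplicate log4j binding pulled in transitively
    libraryDependencies += ("org.apache.hadoop" % "hadoop-client" % "3.1.1")
      .exclude("org.slf4j", "slf4j-log4j12")
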
19/01/24 12:07:24 INFO SparkEnv: Registering MapOutputTracker
19/01/24 12:07:24 INFO SparkEnv: Registering BlockManagerMaster
19/01/24 12:07:24 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/01/24 12:07:24 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/01/24 12:07:24 INFO DiskBlockManager: Created local directory at /hadoop/yarn/local/usercache/nifi/appcache/application_1547783609774_0221/blockmgr-ed1d606e-8100-4b02-ad5b-675ca53bf74b
19/01/24 12:07:24 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
19/01/24 12:07:24 INFO SparkEnv: Registering OutputCommitCoordinator
19/01/24 12:07:24 INFO log: Logging initialized @2928ms
19/01/24 12:07:24 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /jobs, /jobs/json, /jobs/job, /jobs/job/json, /stages, /stages/json, /stages/stage, /stages/stage/json, /stages/pool, /stages/pool/json, /storage, /storage/json, /storage/rdd, /storage/rdd/json, /environment, /environment/json, /executors, /executors/json, /executors/threadDump, /executors/threadDump/json, /static, /, /api, /jobs/job/kill, /stages/stage/kill.
19/01/24 12:07:24 INFO Server: jetty-9.3.z-SNAPSHOT, build timestamp: 2018-06-06T01:11:56+08:00, git hash: 84205aa28f11a4f31f2a3b86d1bba2cc8ab69827
19/01/24 12:07:24 INFO Server: Started @3009ms
19/01/24 12:07:25 INFO AbstractConnector: Started ServerConnector@17bc5a91{HTTP/1.1,[http/1.1]}{0.0.0.0:44759}
19/01/24 12:07:25 INFO Utils: Successfully started service 'SparkUI' on port 44759.
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@7020f669{/jobs,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@5ab95880{/jobs/json,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@10f13315{/jobs/job,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@58dc70b2{/jobs/job/json,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@48b14e71{/stages,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@51c1a5e9{/stages/json,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@37bacfbb{/stages/stage,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@126a6ffd{/stages/stage/json,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@73ccbd1c{/stages/pool,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3c5bfbeb{/stages/pool/json,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@31a7e8b7{/storage,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@70f37cf9{/storage/json,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@66c10ba6{/storage/rdd,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@5a62036{/storage/rdd/json,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@b4a8265{/environment,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@60b53341{/environment/json,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@e4798ca{/executors,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3a54aea4{/executors/json,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@65d24a21{/executors/threadDump,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@2c3452e1{/executors/threadDump/json,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@1e870e65{/static,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@1e0bdbdf{/,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@749f6e5c{/api,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@76565066{/jobs/job/kill,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@46bf31ed{/stages/stage/kill,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://ambari-0004.test.com:44759
19/01/24 12:07:25 INFO YarnClusterScheduler: Created YarnClusterScheduler
19/01/24 12:07:25 INFO SchedulerExtensionServices: Starting Yarn extension services with app application_1547783609774_0221 and attemptId Some(appattempt_1547783609774_0221_000001)
19/01/24 12:07:25 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35778.
19/01/24 12:07:25 INFO NettyBlockTransferService: Server created on ambari-0004.test.com:35778
19/01/24 12:07:25 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/01/24 12:07:25 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, ambari-0004.test.com, 35778, None)
19/01/24 12:07:25 INFO BlockManagerMasterEndpoint: Registering block manager ambari-0004.test.com:35778 with 366.3 MB RAM, BlockManagerId(driver, ambari-0004.test.com, 35778, None)
19/01/24 12:07:25 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, ambari-0004.test.com, 35778, None)
19/01/24 12:07:25 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, ambari-0004.test.com, 35778, None)
19/01/24 12:07:25 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /metrics/json.
19/01/24 12:07:25 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@13abbcf2{/metrics/json,null,AVAILABLE,@Spark}
19/01/24 12:07:25 INFO ApplicationMaster:
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>$HADOOP_CONF_DIR<CPS>/usr/hdp/3.0.1.0-187/hadoop/*<CPS>/usr/hdp/3.0.1.0-187/hadoop/lib/*<CPS>/usr/hdp/current/hadoop-hdfs-client/*<CPS>/usr/hdp/current/hadoop-hdfs-client/lib/*<CPS>/usr/hdp/current/hadoop-yarn-client/*<CPS>/usr/hdp/current/hadoop-yarn-client/lib/*<CPS>$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/3.0.1.0-187/hadoop/lib/hadoop-lzo-0.6.0.3.0.1.0-187.jar:/etc/hadoop/conf/secure<CPS>{{PWD}}/__spark_conf__/__hadoop_conf__
    SPARK_YARN_STAGING_DIR -> hdfs://ambari-0002.test.com:8020/user/nifi/.sparkStaging/application_1547783609774_0221
    SPARK_USER -> nifi

  command:
    {{JAVA_HOME}}/bin/java \
      -server \
      -Xmx512m \
      -Djava.io.tmpdir={{PWD}}/tmp \
      '-Dspark.network.timeout=120s' \
      -Dspark.yarn.app.container.log.dir=<LOG_DIR> \
      -XX:OnOutOfMemoryError='kill %p' \
      org.apache.spark.executor.CoarseGrainedExecutorBackend \
      --driver-url \
      spark://CoarseGrainedScheduler@ambari-0004.test.com:34038 \
      --executor-id \
      <executorId> \
      --hostname \
      <hostname> \
      --cores \
      1 \
      --app-id \
      application_1547783609774_0221 \
      --user-class-path \
      file:$PWD/__app__.jar \
      1><LOG_DIR>/stdout \
      2><LOG_DIR>/stderr

  resources:
    __app__.jar -> resource { scheme: "hdfs" host: "ambari-0002.test.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1547783609774_0221/kylo-spark-job-profiler-jar-with-dependencies.jar" } size: 18639280 timestamp: 1548302839719 type: FILE visibility: PRIVATE
    __spark_conf__ -> resource { scheme: "hdfs" host: "ambari-0002.test.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1547783609774_0221/__spark_conf__.zip" } size: 307679 timestamp: 1548302839992 type: ARCHIVE visibility: PRIVATE
    __spark_libs__ -> resource { scheme: "hdfs" host: "ambari-0002.test.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1547783609774_0221/__spark_libs__8596105546787482643.zip" } size: 311283785 timestamp: 1548302839520 type: ARCHIVE visibility: PRIVATE
    hive-site.xml -> resource { scheme: "hdfs" host: "ambari-0002.test.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1547783609774_0221/hive-site.xml" } size: 23787 timestamp: 1548302839797 type: FILE visibility: PRIVATE
    ffftest_copy_copy_field_policy.json -> resource { scheme: "hdfs" host: "ambari-0002.test.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1547783609774_0221/ffftest_copy_copy_field_policy.json" } size: 2140 timestamp: 1548302839761 type: FILE visibility: PRIVATE
===============================================================================
19/01/24 12:07:25 INFO RMProxy: Connecting to ResourceManager at ambari-0002.test.com/192.168.1.101:8030
19/01/24 12:07:25 INFO YarnRMClient: Registering the ApplicationMaster
19/01/24 12:07:25 INFO Configuration: found resource resource-types.xml at file:/etc/hadoop/3.0.1.0-187/0/resource-types.xml
19/01/24 12:07:25 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark://YarnAM@ambari-0004.test.com:34038)
19/01/24 12:07:25 INFO YarnAllocator: Will request 1 executor container(s), each with 1 core(s) and 896 MB memory (including 384 MB of overhead)
19/01/24 12:07:25 INFO YarnAllocator: Submitted 1 unlocalized container requests.
19/01/24 12:07:25 INFO ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
19/01/24 12:07:25 INFO YarnAllocator: Launching container container_e02_1547783609774_0221_01_000002 on host ambari-0004.test.com for executor with ID 1
19/01/24 12:07:25 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
19/01/24 12:07:27 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (192.168.1.192:49852) with ID 1
19/01/24 12:07:27 INFO BlockManagerMasterEndpoint: Registering block manager ambari-0004.test.com:35356 with 93.3 MB RAM, BlockManagerId(1, ambari-0004.test.com, 35356, None)
19/01/24 12:07:27 INFO YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
19/01/24 12:07:27 INFO YarnClusterScheduler: YarnClusterScheduler.postStartHook done
19/01/24 12:07:27 INFO SharedState: loading hive config file: file:/hadoop/yarn/local/usercache/nifi/appcache/application_1547783609774_0221/container_e02_1547783609774_0221_01_000001/hive-site.xml
19/01/24 12:07:28 INFO SharedState: spark.sql.warehouse.dir is not set, but hive.metastore.warehouse.dir is set. Setting spark.sql.warehouse.dir to the value of hive.metastore.warehouse.dir ('/warehouse/tablespace/managed/hive').
19/01/24 12:07:28 INFO SharedState: Warehouse path is '/warehouse/tablespace/managed/hive'.
19/01/24 12:07:28 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /SQL.
19/01/24 12:07:28 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@739525d0{/SQL,null,AVAILABLE,@Spark}
19/01/24 12:07:28 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /SQL/json.
19/01/24 12:07:28 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@2ccdeb8b{/SQL/json,null,AVAILABLE,@Spark}
19/01/24 12:07:28 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /SQL/execution.
19/01/24 12:07:28 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@3bec92ae{/SQL/execution,null,AVAILABLE,@Spark}
19/01/24 12:07:28 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /SQL/execution/json.
19/01/24 12:07:28 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@37621cd1{/SQL/execution/json,null,AVAILABLE,@Spark}
19/01/24 12:07:28 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /static/sql.
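Note on the 896 MB container request in the YarnAllocator entry above: it follows from Spark 2.3's YARN overhead rule, executor memory (512 MB, matching the -Xmx512m in the launch context) plus max(384 MB, 10% of executor memory) of off-heap overhead. A quick sketch of the arithmetic:

    object ExecutorMemorySketch {
      def main(args: Array[String]): Unit = {
        val executorMemoryMb = 512 // matches -Xmx512m in the launch context above
        // Spark 2.x on YARN: spark.yarn.executor.memoryOverhead defaults to
        // max(384 MB, 0.10 * executor memory)
        val overheadMb = math.max(384, (0.10 * executorMemoryMb).toInt)
        // 10% of 512 MB is ~51 MB, so the 384 MB floor wins: 512 + 384 = 896
        println(s"container request = ${executorMemoryMb + overheadMb} MB (overhead $overheadMb MB)")
      }
    }
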
19/01/24 12:07:28 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@667e2577{/static/sql,null,AVAILABLE,@Spark}
19/01/24 12:07:28 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
19/01/24 12:07:28 INFO ProfilerArguments: Running Spark Profiler with the following command line 6 args (comma separated): table,tests.ffftest_copy_copy_valid,10,tests.ffftest_copy_copy_profile,/tmp/kylo-nifi/spark/tests/ffftest_copy_copy/1548302384572/ffftest_copy_copy_field_policy.json,1548302384572
19/01/24 12:07:28 INFO FieldPolicyLoader: Loading Field Policy JSON file at /tmp/kylo-nifi/spark/tests/ffftest_copy_copy/1548302384572/ffftest_copy_copy_field_policy.json
19/01/24 12:07:28 INFO FieldPolicyLoader: Couldn't find field policy file at /tmp/kylo-nifi/spark/tests/ffftest_copy_copy/1548302384572/ffftest_copy_copy_field_policy.json will check classpath.
19/01/24 12:07:28 INFO FieldPoliciesJsonTransformer: Augmenting partition column validation
19/01/24 12:07:28 INFO FieldPoliciesJsonTransformer: Transformed UI Policies to Field Policies. Total Validation Policies: 0, Total Standardization Policies: 0
19/01/24 12:07:28 INFO FieldPolicyLoader: Finished building field policies for file: ./ffftest_copy_copy_field_policy.json with entity that has 13 fields
19/01/24 12:07:28 INFO Profiler: [PROFILER-INFO] Analyzing profile statistics for: [select `cc`,`country`,`birthdate`,`comments`,`gender`,`registration_dttm`,`last_name`,`ip_address`,`salary`,`title`,`id`,`first_name`,`email` where `processing_dttm` = "1548302384572"]
19/01/24 12:07:28 INFO HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.materializedview.rewriting.incremental does not exist
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.server2.webui.cors.allowed.headers does not exist
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.tez.bucket.pruning does not exist
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.hook.proto.base-directory does not exist
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.load.data.owner does not exist
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.execution.mode does not exist
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.service.metrics.codahale.reporter.classes does not exist
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.strict.managed.tables does not exist
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.create.as.insert.only does not exist
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.metastore.db.type does not exist
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.tez.cartesian-product.enabled does not exist
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.metastore.warehouse.external.dir does not exist
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.server2.webui.use.ssl does not exist
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.heapsize does not exist
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.server2.webui.port does not exist
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.driver.parallel.compilation does not exist
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.optimize.dynamic.partition.hashjoin does not exist
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.server2.webui.enable.cors does not exist
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.txn.strict.locking.mode does not exist
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.metastore.transactional.event.listeners does not exist
19/01/24 12:07:29 WARN HiveConf: HiveConf of name hive.tez.input.generate.consistent.splits does not exist
19/01/24 12:07:29 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083
19/01/24 12:07:29 INFO metastore: Connected to metastore.
19/01/24 12:07:35 INFO SessionState: Created local directory: /hadoop/yarn/local/usercache/nifi/appcache/application_1547783609774_0221/container_e02_1547783609774_0221_01_000001/tmp/yarn
19/01/24 12:07:35 INFO SessionState: Created local directory: /hadoop/yarn/local/usercache/nifi/appcache/application_1547783609774_0221/container_e02_1547783609774_0221_01_000001/tmp/3a457ea0-da42-4c3d-9328-e87fda10c4d2_resources
19/01/24 12:07:35 INFO SessionState: Created HDFS directory: /tmp/hive/nifi/3a457ea0-da42-4c3d-9328-e87fda10c4d2
19/01/24 12:07:35 INFO SessionState: Created local directory: /hadoop/yarn/local/usercache/nifi/appcache/application_1547783609774_0221/container_e02_1547783609774_0221_01_000001/tmp/yarn/3a457ea0-da42-4c3d-9328-e87fda10c4d2
19/01/24 12:07:35 INFO SessionState: Created HDFS directory: /tmp/hive/nifi/3a457ea0-da42-4c3d-9328-e87fda10c4d2/_tmp_space.db
19/01/24 12:07:35 INFO HiveClientImpl: Warehouse location for Hive client (version 1.2.2) is /warehouse/tablespace/managed/hive
19/01/24 12:07:37 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 499.9 KB, free 365.8 MB)
19/01/24 12:07:38 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 58.5 KB, free 365.8 MB)
19/01/24 12:07:38 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on ambari-0004.test.com:35778 (size: 58.5 KB, free: 366.2 MB)
19/01/24 12:07:38 INFO SparkContext: Created broadcast 0 from
19/01/24 12:07:38 INFO SQLStdHiveAccessController: Created SQLStdHiveAccessController for session context : HiveAuthzSessionContext [sessionString=3a457ea0-da42-4c3d-9328-e87fda10c4d2, clientType=HIVECLI]
19/01/24 12:07:38 INFO metastore: Mestastore configuration hive.metastore.filter.hook changed from org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl to org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook
19/01/24 12:07:38 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083
19/01/24 12:07:38 INFO metastore: Connected to metastore.
19/01/24 12:07:38 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083
19/01/24 12:07:38 INFO metastore: Connected to metastore.
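A hedged reading of the six positional arguments in the 12:07:28 ProfilerArguments entry above, inferred from the values and from where they reappear later in the log (the output table and the processing_dttm predicate both show up again); the labels are mine, not Kylo's official names:

    object ProfilerArgsSketch {
      def main(args: Array[String]): Unit = {
        val raw = "table,tests.ffftest_copy_copy_valid,10,tests.ffftest_copy_copy_profile," +
          "/tmp/kylo-nifi/spark/tests/ffftest_copy_copy/1548302384572/ffftest_copy_copy_field_policy.json," +
          "1548302384572"
        // objectType: profile a Hive table; inputTable: source of rows to profile;
        // topN: presumably how many top values to keep per column; outputTable:
        // where the statistics land; policyPath: the localized field-policy JSON;
        // processingDttm: the partition being profiled
        val Array(objectType, inputTable, topN, outputTable, policyPath, processingDttm) = raw.split(',')
        println(s"profile $inputTable (top $topN values) -> $outputTable, partition $processingDttm")
      }
    }
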
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.materializedview.rewriting.incremental does not exist
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.server2.webui.cors.allowed.headers does not exist
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.tez.bucket.pruning does not exist
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.hook.proto.base-directory does not exist
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.load.data.owner does not exist
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.execution.mode does not exist
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.service.metrics.codahale.reporter.classes does not exist
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.strict.managed.tables does not exist
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.create.as.insert.only does not exist
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.metastore.db.type does not exist
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.tez.cartesian-product.enabled does not exist
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.metastore.warehouse.external.dir does not exist
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.server2.webui.use.ssl does not exist
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.heapsize does not exist
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.server2.webui.port does not exist
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.driver.parallel.compilation does not exist
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.optimize.dynamic.partition.hashjoin does not exist
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.server2.webui.enable.cors does not exist
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.txn.strict.locking.mode does not exist
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.metastore.transactional.event.listeners does not exist
19/01/24 12:07:38 WARN HiveConf: HiveConf of name hive.tez.input.generate.consistent.splits does not exist
19/01/24 12:07:38 INFO PerfLogger:
19/01/24 12:07:38 INFO deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
19/01/24 12:07:38 INFO OrcInputFormat: FooterCacheHitRatio: 0/0
19/01/24 12:07:38 INFO PerfLogger:
19/01/24 12:07:38 WARN ClosureCleaner: Expected a closure; got com.thinkbiganalytics.spark.dataprofiler.function.PartitionLevelModels
19/01/24 12:07:38 INFO SparkContext: Starting job: isEmpty at StandardProfiler.scala:45
19/01/24 12:07:38 INFO DAGScheduler: Registering RDD 7 (flatMap at StandardProfiler.scala:40)
19/01/24 12:07:38 INFO DAGScheduler: Got job 0 (isEmpty at StandardProfiler.scala:45) with 1 output partitions
19/01/24 12:07:38 INFO DAGScheduler: Final stage: ResultStage 1 (isEmpty at StandardProfiler.scala:45)
19/01/24 12:07:38 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
19/01/24 12:07:38 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
19/01/24 12:07:38 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[7] at flatMap at StandardProfiler.scala:40), which has no missing parents
19/01/24 12:07:38 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 20.6 KB, free 365.7 MB)
19/01/24 12:07:38 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 9.5 KB, free 365.7 MB)
19/01/24 12:07:38 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on ambari-0004.test.com:35778 (size: 9.5 KB, free: 366.2 MB)
19/01/24 12:07:38 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1039
19/01/24 12:07:38 INFO DAGScheduler: Submitting 2 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[7] at flatMap at StandardProfiler.scala:40) (first 15 tasks are for partitions Vector(0, 1))
19/01/24 12:07:38 INFO YarnClusterScheduler: Adding task set 0.0 with 2 tasks
19/01/24 12:07:38 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, ambari-0004.test.com, executor 1, partition 0, NODE_LOCAL, 8111 bytes)
19/01/24 12:07:39 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on ambari-0004.test.com:35356 (size: 9.5 KB, free: 93.3 MB)
19/01/24 12:07:39 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on ambari-0004.test.com:35356 (size: 58.5 KB, free: 93.2 MB)
19/01/24 12:07:42 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, ambari-0004.test.com, executor 1, partition 1, NODE_LOCAL, 8111 bytes)
19/01/24 12:07:42 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 3333 ms on ambari-0004.test.com (executor 1) (1/2)
19/01/24 12:07:42 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 73 ms on ambari-0004.test.com (executor 1) (2/2)
19/01/24 12:07:42 INFO YarnClusterScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
19/01/24 12:07:42 INFO DAGScheduler: ShuffleMapStage 0 (flatMap at StandardProfiler.scala:40) finished in 3.510 s
19/01/24 12:07:42 INFO DAGScheduler: looking for newly runnable stages
19/01/24 12:07:42 INFO DAGScheduler: running: Set()
19/01/24 12:07:42 INFO DAGScheduler: waiting: Set(ResultStage 1)
19/01/24 12:07:42 INFO DAGScheduler: failed: Set()
19/01/24 12:07:42 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[9] at mapPartitions at StandardProfiler.scala:44), which has no missing parents
19/01/24 12:07:42 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 6.7 KB, free 365.7 MB)
19/01/24 12:07:42 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 3.4 KB, free 365.7 MB)
19/01/24 12:07:42 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on ambari-0004.test.com:35778 (size: 3.4 KB, free: 366.2 MB)
19/01/24 12:07:42 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1039
19/01/24 12:07:42 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[9] at mapPartitions at StandardProfiler.scala:44) (first 15 tasks are for partitions Vector(0))
19/01/24 12:07:42 INFO YarnClusterScheduler: Adding task set 1.0 with 1 tasks
19/01/24 12:07:42 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 2, ambari-0004.test.com, executor 1, partition 0, NODE_LOCAL, 7638 bytes)
19/01/24 12:07:42 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on ambari-0004.test.com:35356 (size: 3.4 KB, free: 93.2 MB)
19/01/24 12:07:42 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to 192.168.1.192:49852
19/01/24 12:07:42 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 2) in 146 ms on ambari-0004.test.com (executor 1) (1/1)
19/01/24 12:07:42 INFO YarnClusterScheduler: Removed TaskSet 1.0, whose tasks have all completed, from pool
19/01/24 12:07:42 INFO DAGScheduler: ResultStage 1 (isEmpty at StandardProfiler.scala:45) finished in 0.158 s
19/01/24 12:07:42 INFO DAGScheduler: Job 0 finished: isEmpty at StandardProfiler.scala:45, took 3.746631 s
19/01/24 12:07:42 INFO SparkContext: Starting job: reduce at StandardProfiler.scala:46
19/01/24 12:07:42 INFO DAGScheduler: Got job 1 (reduce at StandardProfiler.scala:46) with 2 output partitions
19/01/24 12:07:42 INFO DAGScheduler: Final stage: ResultStage 3 (reduce at StandardProfiler.scala:46)
19/01/24 12:07:42 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 2)
19/01/24 12:07:42 INFO DAGScheduler: Missing parents: List()
19/01/24 12:07:42 INFO DAGScheduler: Submitting ResultStage 3 (MapPartitionsRDD[9] at mapPartitions at StandardProfiler.scala:44), which has no missing parents
19/01/24 12:07:42 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 6.6 KB, free 365.7 MB)
19/01/24 12:07:42 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 3.3 KB, free 365.7 MB)
19/01/24 12:07:42 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on ambari-0004.test.com:35778 (size: 3.3 KB, free: 366.2 MB)
19/01/24 12:07:42 INFO SparkContext: Created broadcast 3 from broadcast at DAGScheduler.scala:1039
19/01/24 12:07:42 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 3 (MapPartitionsRDD[9] at mapPartitions at StandardProfiler.scala:44) (first 15 tasks are for partitions Vector(0, 1))
19/01/24 12:07:42 INFO YarnClusterScheduler: Adding task set 3.0 with 2 tasks
19/01/24 12:07:42 INFO TaskSetManager: Starting task 0.0 in stage 3.0 (TID 3, ambari-0004.test.com, executor 1, partition 0, NODE_LOCAL, 7638 bytes)
19/01/24 12:07:42 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on ambari-0004.test.com:35356 (size: 3.3 KB, free: 93.2 MB)
19/01/24 12:07:42 INFO TaskSetManager: Starting task 1.0 in stage 3.0 (TID 4, ambari-0004.test.com, executor 1, partition 1, NODE_LOCAL, 7638 bytes)
19/01/24 12:07:42 INFO TaskSetManager: Finished task 0.0 in stage 3.0 (TID 3) in 68 ms on ambari-0004.test.com (executor 1) (1/2)
19/01/24 12:07:42 INFO TaskSetManager: Finished task 1.0 in stage 3.0 (TID 4) in 48 ms on ambari-0004.test.com (executor 1) (2/2)
19/01/24 12:07:42 INFO YarnClusterScheduler: Removed TaskSet 3.0, whose tasks have all completed, from pool
19/01/24 12:07:42 INFO DAGScheduler: ResultStage 3 (reduce at StandardProfiler.scala:46) finished in 0.104 s
19/01/24 12:07:42 INFO DAGScheduler: Job 1 finished: reduce at StandardProfiler.scala:46, took 0.107718 s
19/01/24 12:07:42 INFO SparkContext: Starting job: histogram at HistogramStatistics.java:66
19/01/24 12:07:42 INFO DAGScheduler: Got job 2 (histogram at HistogramStatistics.java:66) with 2 output partitions
19/01/24 12:07:42 INFO DAGScheduler: Final stage: ResultStage 4 (histogram at HistogramStatistics.java:66)
19/01/24 12:07:42 INFO DAGScheduler: Parents of final stage: List()
19/01/24 12:07:42 INFO DAGScheduler: Missing parents: List()
19/01/24 12:07:42 INFO DAGScheduler: Submitting ResultStage 4 (MapPartitionsRDD[12] at histogram at HistogramStatistics.java:66), which has no missing parents
19/01/24 12:07:42 INFO MemoryStore: Block broadcast_4 stored as values in memory (estimated size 18.8 KB, free 365.7 MB)
19/01/24 12:07:42 INFO MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 8.9 KB, free 365.7 MB)
19/01/24 12:07:42 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on ambari-0004.test.com:35778 (size: 8.9 KB, free: 366.2 MB)
19/01/24 12:07:42 INFO SparkContext: Created broadcast 4 from broadcast at DAGScheduler.scala:1039
19/01/24 12:07:42 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 4 (MapPartitionsRDD[12] at histogram at HistogramStatistics.java:66) (first 15 tasks are for partitions Vector(0, 1))
19/01/24 12:07:42 INFO YarnClusterScheduler: Adding task set 4.0 with 2 tasks
19/01/24 12:07:42 INFO TaskSetManager: Starting task 0.0 in stage 4.0 (TID 5, ambari-0004.test.com, executor 1, partition 0, NODE_LOCAL, 8122 bytes)
19/01/24 12:07:42 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on ambari-0004.test.com:35356 (size: 8.9 KB, free: 93.2 MB)
19/01/24 12:07:42 INFO TaskSetManager: Starting task 1.0 in stage 4.0 (TID 6, ambari-0004.test.com, executor 1, partition 1, NODE_LOCAL, 8122 bytes)
19/01/24 12:07:42 INFO TaskSetManager: Finished task 0.0 in stage 4.0 (TID 5) in 49 ms on ambari-0004.test.com (executor 1) (1/2)
19/01/24 12:07:42 INFO TaskSetManager: Finished task 1.0 in stage 4.0 (TID 6) in 26 ms on ambari-0004.test.com (executor 1) (2/2)
19/01/24 12:07:42 INFO YarnClusterScheduler: Removed TaskSet 4.0, whose tasks have all completed, from pool
19/01/24 12:07:42 INFO DAGScheduler: ResultStage 4 (histogram at HistogramStatistics.java:66) finished in 0.085 s
19/01/24 12:07:42 INFO DAGScheduler: Job 2 finished: histogram at HistogramStatistics.java:66, took 0.087182 s
19/01/24 12:07:42 INFO SparkContext: Starting job: histogram at HistogramStatistics.java:66
19/01/24 12:07:42 INFO DAGScheduler: Got job 3 (histogram at HistogramStatistics.java:66) with 2 output partitions
19/01/24 12:07:42 INFO DAGScheduler: Final stage: ResultStage 5 (histogram at HistogramStatistics.java:66)
19/01/24 12:07:42 INFO DAGScheduler: Parents of final stage: List()
19/01/24 12:07:42 INFO DAGScheduler: Missing parents: List()
19/01/24 12:07:42 INFO DAGScheduler: Submitting ResultStage 5 (MapPartitionsRDD[13] at histogram at HistogramStatistics.java:66), which has no missing parents
19/01/24 12:07:42 INFO MemoryStore: Block broadcast_5 stored as values in memory (estimated size 19.5 KB, free 365.7 MB)
19/01/24 12:07:42 INFO MemoryStore: Block broadcast_5_piece0 stored as bytes in memory (estimated size 9.2 KB, free 365.7 MB)
19/01/24 12:07:42 INFO BlockManagerInfo: Added broadcast_5_piece0 in memory on ambari-0004.test.com:35778 (size: 9.2 KB, free: 366.2 MB)
19/01/24 12:07:42 INFO SparkContext: Created broadcast 5 from broadcast at DAGScheduler.scala:1039
19/01/24 12:07:42 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 5 (MapPartitionsRDD[13] at histogram at HistogramStatistics.java:66) (first 15 tasks are for partitions Vector(0, 1))
19/01/24 12:07:42 INFO YarnClusterScheduler: Adding task set 5.0 with 2 tasks
19/01/24 12:07:42 INFO TaskSetManager: Starting task 0.0 in stage 5.0 (TID 7, ambari-0004.test.com, executor 1, partition 0, NODE_LOCAL, 8122 bytes)
19/01/24 12:07:42 INFO BlockManagerInfo: Added broadcast_5_piece0 in memory on ambari-0004.test.com:35356 (size: 9.2 KB, free: 93.2 MB)
19/01/24 12:07:42 INFO TaskSetManager: Starting task 1.0 in stage 5.0 (TID 8, ambari-0004.test.com, executor 1, partition 1, NODE_LOCAL, 8122 bytes)
19/01/24 12:07:42 INFO TaskSetManager: Finished task 0.0 in stage 5.0 (TID 7) in 48 ms on ambari-0004.test.com (executor 1) (1/2)
19/01/24 12:07:42 INFO TaskSetManager: Finished task 1.0 in stage 5.0 (TID 8) in 24 ms on ambari-0004.test.com (executor 1) (2/2)
19/01/24 12:07:42 INFO YarnClusterScheduler: Removed TaskSet 5.0, whose tasks have all completed, from pool
19/01/24 12:07:42 INFO DAGScheduler: ResultStage 5 (histogram at HistogramStatistics.java:66) finished in 0.081 s
19/01/24 12:07:42 INFO DAGScheduler: Job 3 finished: histogram at HistogramStatistics.java:66, took 0.083380 s
19/01/24 12:07:42 INFO SparkContext: Starting job: histogram at HistogramStatistics.java:66
19/01/24 12:07:42 INFO DAGScheduler: Got job 4 (histogram at HistogramStatistics.java:66) with 2 output partitions
19/01/24 12:07:42 INFO DAGScheduler: Final stage: ResultStage 6 (histogram at HistogramStatistics.java:66)
19/01/24 12:07:42 INFO DAGScheduler: Parents of final stage: List()
19/01/24 12:07:42 INFO DAGScheduler: Missing parents: List()
19/01/24 12:07:42 INFO DAGScheduler: Submitting ResultStage 6 (MapPartitionsRDD[16] at histogram at HistogramStatistics.java:66), which has no missing parents
19/01/24 12:07:42 INFO MemoryStore: Block broadcast_6 stored as values in memory (estimated size 18.8 KB, free 365.6 MB)
19/01/24 12:07:42 INFO MemoryStore: Block broadcast_6_piece0 stored as bytes in memory (estimated size 8.9 KB, free 365.6 MB)
19/01/24 12:07:42 INFO BlockManagerInfo: Added broadcast_6_piece0 in memory on ambari-0004.test.com:35778 (size: 8.9 KB, free: 366.2 MB)
19/01/24 12:07:42 INFO SparkContext: Created broadcast 6 from broadcast at DAGScheduler.scala:1039
19/01/24 12:07:42 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 6 (MapPartitionsRDD[16] at histogram at HistogramStatistics.java:66) (first 15 tasks are for partitions Vector(0, 1))
19/01/24 12:07:42 INFO YarnClusterScheduler: Adding task set 6.0 with 2 tasks
19/01/24 12:07:42 INFO TaskSetManager: Starting task 0.0 in stage 6.0 (TID 9, ambari-0004.test.com, executor 1, partition 0, NODE_LOCAL, 8122 bytes)
19/01/24 12:07:42 INFO BlockManagerInfo: Added broadcast_6_piece0 in memory on ambari-0004.test.com:35356 (size: 8.9 KB, free: 93.2 MB)
19/01/24 12:07:42 INFO TaskSetManager: Starting task 1.0 in stage 6.0 (TID 10, ambari-0004.test.com, executor 1, partition 1, NODE_LOCAL, 8122 bytes)
19/01/24 12:07:42 INFO TaskSetManager: Finished task 0.0 in stage 6.0 (TID 9) in 47 ms on ambari-0004.test.com (executor 1) (1/2)
19/01/24 12:07:43 INFO TaskSetManager: Finished task 1.0 in stage 6.0 (TID 10) in 30 ms on ambari-0004.test.com (executor 1) (2/2)
19/01/24 12:07:43 INFO YarnClusterScheduler: Removed TaskSet 6.0, whose tasks have all completed, from pool
19/01/24 12:07:43 INFO DAGScheduler: ResultStage 6 (histogram at HistogramStatistics.java:66) finished in 0.087 s
19/01/24 12:07:43 INFO DAGScheduler: Job 4 finished: histogram at HistogramStatistics.java:66, took 0.096468 s
19/01/24 12:07:43 INFO SparkContext: Starting job: histogram at HistogramStatistics.java:66
19/01/24 12:07:43 INFO DAGScheduler: Got job 5 (histogram at HistogramStatistics.java:66) with 2 output partitions
19/01/24 12:07:43 INFO DAGScheduler: Final stage: ResultStage 7 (histogram at HistogramStatistics.java:66)
19/01/24 12:07:43 INFO DAGScheduler: Parents of final stage: List()
19/01/24 12:07:43 INFO DAGScheduler: Missing parents: List()
19/01/24 12:07:43 INFO DAGScheduler: Submitting ResultStage 7 (MapPartitionsRDD[17] at histogram at HistogramStatistics.java:66), which has no missing parents
19/01/24 12:07:43 INFO MemoryStore: Block broadcast_7 stored as values in memory (estimated size 19.5 KB, free 365.6 MB)
19/01/24 12:07:43 INFO MemoryStore: Block broadcast_7_piece0 stored as bytes in memory (estimated size 9.2 KB, free 365.6 MB)
19/01/24 12:07:43 INFO BlockManagerInfo: Added broadcast_7_piece0 in memory on ambari-0004.test.com:35778 (size: 9.2 KB, free: 366.2 MB)
19/01/24 12:07:43 INFO SparkContext: Created broadcast 7 from broadcast at DAGScheduler.scala:1039
19/01/24 12:07:43 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 7 (MapPartitionsRDD[17] at histogram at HistogramStatistics.java:66) (first 15 tasks are for partitions Vector(0, 1))
19/01/24 12:07:43 INFO YarnClusterScheduler: Adding task set 7.0 with 2 tasks
19/01/24 12:07:43 INFO TaskSetManager: Starting task 0.0 in stage 7.0 (TID 11, ambari-0004.test.com, executor 1, partition 0, NODE_LOCAL, 8122 bytes)
19/01/24 12:07:43 INFO BlockManagerInfo: Added broadcast_7_piece0 in memory on ambari-0004.test.com:35356 (size: 9.2 KB, free: 93.2 MB)
19/01/24 12:07:43 INFO TaskSetManager: Starting task 1.0 in stage 7.0 (TID 12, ambari-0004.test.com, executor 1, partition 1, NODE_LOCAL, 8122 bytes)
19/01/24 12:07:43 INFO TaskSetManager: Finished task 0.0 in stage 7.0 (TID 11) in 37 ms on ambari-0004.test.com (executor 1) (1/2)
19/01/24 12:07:43 INFO TaskSetManager: Finished task 1.0 in stage 7.0 (TID 12) in 24 ms on ambari-0004.test.com (executor 1) (2/2)
19/01/24 12:07:43 INFO YarnClusterScheduler: Removed TaskSet 7.0, whose tasks have all completed, from pool
19/01/24 12:07:43 INFO DAGScheduler: ResultStage 7 (histogram at HistogramStatistics.java:66) finished in 0.069 s
19/01/24 12:07:43 INFO DAGScheduler: Job 5 finished: histogram at HistogramStatistics.java:66, took 0.071648 s
19/01/24 12:07:43 INFO SparkContext: Starting job: histogram at HistogramStatistics.java:66
19/01/24 12:07:43 INFO DAGScheduler: Got job 6 (histogram at HistogramStatistics.java:66) with 2 output partitions
19/01/24 12:07:43 INFO DAGScheduler: Final stage: ResultStage 8 (histogram at HistogramStatistics.java:66)
19/01/24 12:07:43 INFO DAGScheduler: Parents of final stage: List()
19/01/24 12:07:43 INFO DAGScheduler: Missing parents: List()
19/01/24 12:07:43 INFO DAGScheduler: Submitting ResultStage 8 (MapPartitionsRDD[20] at histogram at HistogramStatistics.java:66), which has no missing parents
19/01/24 12:07:43 INFO MemoryStore: Block broadcast_8 stored as values in memory (estimated size 18.8 KB, free 365.6 MB)
19/01/24 12:07:43 INFO MemoryStore: Block broadcast_8_piece0 stored as bytes in memory (estimated size 8.9 KB, free 365.6 MB)
19/01/24 12:07:43 INFO BlockManagerInfo: Added broadcast_8_piece0 in memory on ambari-0004.test.com:35778 (size: 8.9 KB, free: 366.2 MB)
19/01/24 12:07:43 INFO SparkContext: Created broadcast 8 from broadcast at DAGScheduler.scala:1039
19/01/24 12:07:43 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 8 (MapPartitionsRDD[20] at histogram at HistogramStatistics.java:66) (first 15 tasks are for partitions Vector(0, 1))
19/01/24 12:07:43 INFO YarnClusterScheduler: Adding task set 8.0 with 2 tasks
19/01/24 12:07:43 INFO TaskSetManager: Starting task 0.0 in stage 8.0 (TID 13, ambari-0004.test.com, executor 1, partition 0, NODE_LOCAL, 8122 bytes)
19/01/24 12:07:43 INFO BlockManagerInfo: Added broadcast_8_piece0 in memory on ambari-0004.test.com:35356 (size: 8.9 KB, free: 93.2 MB)
19/01/24 12:07:43 INFO TaskSetManager: Starting task 1.0 in stage 8.0 (TID 14, ambari-0004.test.com, executor 1, partition 1, NODE_LOCAL, 8122 bytes)
19/01/24 12:07:43 INFO TaskSetManager: Finished task 0.0 in stage 8.0 (TID 13) in 49 ms on ambari-0004.test.com (executor 1) (1/2)
19/01/24 12:07:43 INFO TaskSetManager: Finished task 1.0 in stage 8.0 (TID 14) in 23 ms on ambari-0004.test.com (executor 1) (2/2)
19/01/24 12:07:43 INFO YarnClusterScheduler: Removed TaskSet 8.0, whose tasks have all completed, from pool
19/01/24 12:07:43 INFO DAGScheduler: ResultStage 8 (histogram at HistogramStatistics.java:66) finished in 0.082 s
19/01/24 12:07:43 INFO DAGScheduler: Job 6 finished: histogram at HistogramStatistics.java:66, took 0.084444 s
19/01/24 12:07:43 INFO SparkContext: Starting job: histogram at HistogramStatistics.java:66
19/01/24 12:07:43 INFO DAGScheduler: Got job 7 (histogram at HistogramStatistics.java:66) with 2 output partitions
19/01/24 12:07:43 INFO DAGScheduler: Final stage: ResultStage 9 (histogram at HistogramStatistics.java:66)
19/01/24 12:07:43 INFO DAGScheduler: Parents of final stage: List()
19/01/24 12:07:43 INFO DAGScheduler: Missing parents: List()
19/01/24 12:07:43 INFO DAGScheduler: Submitting ResultStage 9 (MapPartitionsRDD[21] at histogram at HistogramStatistics.java:66), which has no missing parents
19/01/24 12:07:43 INFO MemoryStore: Block broadcast_9 stored as values in memory (estimated size 19.5 KB, free 365.5 MB)
19/01/24 12:07:43 INFO MemoryStore: Block broadcast_9_piece0 stored as bytes in memory (estimated size 9.2 KB, free 365.5 MB)
19/01/24 12:07:43 INFO BlockManagerInfo: Added broadcast_9_piece0 in memory on ambari-0004.test.com:35778 (size: 9.2 KB, free: 366.2 MB)
19/01/24 12:07:43 INFO SparkContext: Created broadcast 9 from broadcast at DAGScheduler.scala:1039
19/01/24 12:07:43 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 9 (MapPartitionsRDD[21] at histogram at HistogramStatistics.java:66) (first 15 tasks are for partitions Vector(0, 1))
19/01/24 12:07:43 INFO YarnClusterScheduler: Adding task set 9.0 with 2 tasks
19/01/24 12:07:43 INFO TaskSetManager: Starting task 0.0 in stage 9.0 (TID 15, ambari-0004.test.com, executor 1, partition 0, NODE_LOCAL, 8122 bytes)
19/01/24 12:07:43 INFO BlockManagerInfo: Added broadcast_9_piece0 in memory on ambari-0004.test.com:35356 (size: 9.2 KB, free: 93.2 MB)
19/01/24 12:07:43 INFO TaskSetManager: Starting task 1.0 in stage 9.0 (TID 16, ambari-0004.test.com, executor 1, partition 1, NODE_LOCAL, 8122 bytes)
19/01/24 12:07:43 INFO TaskSetManager: Finished task 0.0 in stage 9.0 (TID 15) in 52 ms on ambari-0004.test.com (executor 1) (1/2)
19/01/24 12:07:43 INFO TaskSetManager: Finished task 1.0 in stage 9.0 (TID 16) in 26 ms on ambari-0004.test.com (executor 1) (2/2)
19/01/24 12:07:43 INFO YarnClusterScheduler: Removed TaskSet 9.0, whose tasks have all completed, from pool
19/01/24 12:07:43 INFO DAGScheduler: ResultStage 9 (histogram at HistogramStatistics.java:66) finished in 0.088 s
19/01/24 12:07:43 INFO DAGScheduler: Job 7 finished: histogram at HistogramStatistics.java:66, took 0.091901 s
19/01/24 12:07:43 INFO SparkContext: Starting job: histogram at HistogramStatistics.java:66
19/01/24 12:07:43 INFO DAGScheduler: Got job 8 (histogram at HistogramStatistics.java:66) with 2 output partitions
19/01/24 12:07:43 INFO DAGScheduler: Final stage: ResultStage 10 (histogram at HistogramStatistics.java:66)
19/01/24 12:07:43 INFO DAGScheduler: Parents of final stage: List()
19/01/24 12:07:43 INFO DAGScheduler: Missing parents: List()
19/01/24 12:07:43 INFO DAGScheduler: Submitting ResultStage 10 (MapPartitionsRDD[24] at histogram at HistogramStatistics.java:66), which has no missing parents
19/01/24 12:07:43 INFO MemoryStore: Block broadcast_10 stored as values in memory (estimated size 18.8 KB, free 365.5 MB)
19/01/24 12:07:43 INFO MemoryStore: Block broadcast_10_piece0 stored as bytes in memory (estimated size 8.9 KB, free 365.5 MB)
19/01/24 12:07:43 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on ambari-0004.test.com:35778 (size: 8.9 KB, free: 366.2 MB)
19/01/24 12:07:43 INFO SparkContext: Created broadcast 10 from broadcast at DAGScheduler.scala:1039
19/01/24 12:07:43 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 10 (MapPartitionsRDD[24] at histogram at HistogramStatistics.java:66) (first 15 tasks are for partitions Vector(0, 1))
19/01/24 12:07:43 INFO YarnClusterScheduler: Adding task set 10.0 with 2 tasks
19/01/24 12:07:43 INFO TaskSetManager: Starting task 0.0 in stage 10.0 (TID 17, ambari-0004.test.com, executor 1, partition 0, NODE_LOCAL, 8122 bytes)
19/01/24 12:07:43 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on ambari-0004.test.com:35356 (size: 8.9 KB, free: 93.2 MB)
19/01/24 12:07:43 INFO TaskSetManager: Starting task 1.0 in stage 10.0 (TID 18, ambari-0004.test.com, executor 1, partition 1, NODE_LOCAL, 8122 bytes)
19/01/24 12:07:43 INFO TaskSetManager: Finished task 0.0 in stage 10.0 (TID 17) in 41 ms on ambari-0004.test.com (executor 1) (1/2)
19/01/24 12:07:43 INFO TaskSetManager: Finished task 1.0 in stage 10.0 (TID 18) in 28 ms on ambari-0004.test.com (executor 1) (2/2)
19/01/24 12:07:43 INFO YarnClusterScheduler: Removed TaskSet 10.0, whose tasks have all completed, from pool
19/01/24 12:07:43 INFO DAGScheduler: ResultStage 10 (histogram at HistogramStatistics.java:66) finished in 0.089 s
19/01/24 12:07:43 INFO DAGScheduler: Job 8 finished: histogram at HistogramStatistics.java:66, took 0.091702 s
19/01/24 12:07:43 INFO SparkContext: Starting job: histogram at HistogramStatistics.java:66
19/01/24 12:07:43 INFO DAGScheduler: Got job 9 (histogram at HistogramStatistics.java:66) with 2 output partitions
19/01/24 12:07:43 INFO DAGScheduler: Final stage: ResultStage 11 (histogram at HistogramStatistics.java:66)
19/01/24 12:07:43 INFO DAGScheduler: Parents of final stage: List()
19/01/24 12:07:43 INFO DAGScheduler: Missing parents: List()
19/01/24 12:07:43 INFO DAGScheduler: Submitting ResultStage 11 (MapPartitionsRDD[25] at histogram at HistogramStatistics.java:66), which has no missing parents
19/01/24 12:07:43 INFO MemoryStore: Block broadcast_11 stored as values in memory (estimated size 19.5 KB, free 365.5 MB)
19/01/24 12:07:43 INFO MemoryStore: Block broadcast_11_piece0 stored as bytes in memory (estimated size 9.2 KB, free 365.5 MB)
19/01/24 12:07:43 INFO BlockManagerInfo: Added broadcast_11_piece0 in memory on ambari-0004.test.com:35778 (size: 9.2 KB, free: 366.2 MB)
19/01/24 12:07:43 INFO SparkContext: Created broadcast 11 from broadcast at DAGScheduler.scala:1039
19/01/24 12:07:43 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 11 (MapPartitionsRDD[25] at histogram at HistogramStatistics.java:66) (first 15 tasks are for partitions Vector(0, 1))
19/01/24 12:07:43 INFO YarnClusterScheduler: Adding task set 11.0 with 2 tasks
19/01/24 12:07:43 INFO TaskSetManager: Starting task 0.0 in stage 11.0 (TID 19, ambari-0004.test.com, executor 1, partition 0, NODE_LOCAL, 8122 bytes)
19/01/24 12:07:43 INFO BlockManagerInfo: Added broadcast_11_piece0 in memory on ambari-0004.test.com:35356 (size: 9.2 KB, free: 93.2 MB)
19/01/24 12:07:43 INFO TaskSetManager: Starting task 1.0 in stage 11.0 (TID 20, ambari-0004.test.com, executor 1, partition 1, NODE_LOCAL, 8122 bytes)
19/01/24 12:07:43 INFO TaskSetManager: Finished task 0.0 in stage 11.0 (TID 19) in 58 ms on ambari-0004.test.com (executor 1) (1/2)
19/01/24 12:07:43 INFO TaskSetManager: Finished task 1.0 in stage 11.0 (TID 20) in 43 ms on ambari-0004.test.com (executor 1) (2/2)
19/01/24 12:07:43 INFO YarnClusterScheduler: Removed TaskSet 11.0, whose tasks have all completed, from pool
19/01/24 12:07:43 INFO DAGScheduler: ResultStage 11 (histogram at HistogramStatistics.java:66) finished in 0.108 s
19/01/24 12:07:43 INFO DAGScheduler: Job 9 finished: histogram at HistogramStatistics.java:66, took 0.109655 s
19/01/24 12:07:45 INFO FileUtils: Creating directory if it doesn't exist: hdfs://ambari-0002.test.com:8020/model.db/tests/ffftest_copy_copy/profile/.hive-staging_hive_2019-01-24_12-07-45_142_6340309537996504312-1
19/01/24 12:07:45 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
19/01/24 12:07:45 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/01/24 12:07:45 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
19/01/24 12:07:45 INFO SparkContext: Starting job: sql at SparkContextService20.java:63
19/01/24 12:07:45 INFO DAGScheduler: Got job 10 (sql at SparkContextService20.java:63) with 2 output partitions
19/01/24 12:07:45 INFO DAGScheduler: Final stage: ResultStage 12 (sql at SparkContextService20.java:63)
19/01/24 12:07:45 INFO DAGScheduler: Parents of final stage: List()
19/01/24 12:07:45 INFO DAGScheduler: Missing parents: List()
19/01/24 12:07:45 INFO DAGScheduler: Submitting ResultStage 12 (MapPartitionsRDD[28] at sql at SparkContextService20.java:63), which has no missing parents
19/01/24 12:07:45 INFO MemoryStore: Block broadcast_12 stored as values in memory (estimated size 609.3 KB, free 364.9 MB)
19/01/24 12:07:45 INFO MemoryStore: Block broadcast_12_piece0 stored as bytes in memory (estimated size 180.0 KB, free 364.7 MB)
19/01/24 12:07:45 INFO BlockManagerInfo: Added broadcast_12_piece0 in memory on ambari-0004.test.com:35778 (size: 180.0 KB, free: 366.0 MB)
19/01/24 12:07:45 INFO SparkContext: Created broadcast 12 from broadcast at DAGScheduler.scala:1039
19/01/24 12:07:45 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 12 (MapPartitionsRDD[28] at sql at SparkContextService20.java:63) (first 15 tasks are for partitions Vector(0, 1))
19/01/24 12:07:45 INFO YarnClusterScheduler: Adding task set 12.0 with 2 tasks
19/01/24 12:07:45 INFO TaskSetManager: Starting task 0.0 in stage 12.0 (TID 21, ambari-0004.test.com, executor 1, partition 0, PROCESS_LOCAL, 10314 bytes)
19/01/24 12:07:45 INFO BlockManagerInfo: Added broadcast_12_piece0 in memory on ambari-0004.test.com:35356 (size: 180.0 KB, free: 93.0 MB)
19/01/24 12:07:45 INFO TaskSetManager: Starting task 1.0 in stage 12.0 (TID 22, ambari-0004.test.com, executor 1, partition 1, PROCESS_LOCAL, 10213 bytes)
19/01/24 12:07:45 INFO TaskSetManager: Finished task 0.0 in stage 12.0 (TID 21) in 387 ms on ambari-0004.test.com (executor 1) (1/2)
19/01/24 12:07:45 INFO TaskSetManager: Finished task 1.0 in stage 12.0 (TID 22) in 85 ms on ambari-0004.test.com (executor 1) (2/2)
19/01/24 12:07:45 INFO DAGScheduler: ResultStage 12 (sql at SparkContextService20.java:63) finished in 0.537 s
19/01/24 12:07:45 INFO DAGScheduler: Job 10 finished: sql at SparkContextService20.java:63, took 0.540160 s
19/01/24 12:07:45 INFO YarnClusterScheduler: Removed TaskSet 12.0, whose tasks have all completed, from pool
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 226
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 108
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 172
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 134
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 32
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 221
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 94
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 269
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 250
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 268
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 304
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 197
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 196
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 35
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 159
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 51
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 238
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 274
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 283
19/01/24 12:07:46 INFO FileFormatWriter: Job null committed.
19/01/24 12:07:46 INFO FileFormatWriter: Finished processing stats for job null.
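Note on the 12:07:45 FileOutputCommitter entries above: the write used commit algorithm version 2, where tasks rename their output directly into the destination directory so the job-level commit is cheap. If one wanted to pin that behavior explicitly in a Spark application, a minimal sketch (the app name is made up; the config key is the standard Hadoop property routed through Spark's spark.hadoop.* prefix):

    import org.apache.spark.sql.SparkSession

    object CommitterV2Sketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("committer-v2-sketch") // hypothetical app name
          .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
          .getOrCreate()
        // verify the setting landed in the Hadoop configuration
        println(spark.sparkContext.hadoopConfiguration.get("mapreduce.fileoutputcommitter.algorithm.version"))
        spark.stop()
      }
    }
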
19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_9_piece0 on ambari-0004.test.com:35778 in memory (size: 9.2 KB, free: 366.0 MB)
19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_9_piece0 on ambari-0004.test.com:35356 in memory (size: 9.2 KB, free: 93.0 MB)
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 241
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 16
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 125
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 53
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 24
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 262
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 167
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 87
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 157
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 116
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 41
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 88
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 5
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 212
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 219
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 14
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 204
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 194
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 127
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 49
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 48
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 258
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 10
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 213
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 163
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 299
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 210
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 91
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 18
19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_12_piece0 on ambari-0004.test.com:35778 in memory (size: 180.0 KB, free: 366.2 MB)
19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_12_piece0 on ambari-0004.test.com:35356 in memory (size: 180.0 KB, free: 93.2 MB)
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 17
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 290
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 8
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 23
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 225
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 298
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 45
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 191
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 162
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 230
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 301
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 77
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 74
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 233
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 305
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 132
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 179
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 211
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 265
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 122
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 126
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 147
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 34
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 216
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 257
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 26
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 25
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 182
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 272
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 28
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 93
19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_3_piece0 on ambari-0004.test.com:35778 in memory (size: 3.3 KB, free: 366.2 MB)
19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_3_piece0 on ambari-0004.test.com:35356 in memory (size: 3.3 KB, free: 93.2 MB)
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 173
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 267
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 123
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 165
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 303
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 168
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 232
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 4
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 12
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 176
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 39
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 148
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 27
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 42
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 249
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 154
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 190
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 228
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 302
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 195
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 68
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 106
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 207
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 36
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 203
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 79
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 90
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 2
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 242
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 80
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 245
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 252
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 144
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 58
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 291
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 107
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 178
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 158
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 300
19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 111
19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_5_piece0 on ambari-0004.test.com:35356 in memory (size: 9.2 KB, free: 93.2 MB) 19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_5_piece0 on ambari-0004.test.com:35778 in memory (size: 9.2 KB, free: 366.2 MB) 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 261 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 136 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 20 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 78 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 22 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 149 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 260 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 275 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 110 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 161 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 186 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 95 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 124 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 199 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 1 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 76 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 13 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 264 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 234 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 244 19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_1_piece0 on ambari-0004.test.com:35778 in memory (size: 9.5 KB, free: 366.2 MB) 19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_1_piece0 on ambari-0004.test.com:35356 in memory (size: 9.5 KB, free: 93.2 MB) 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 92 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 170 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 11 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 143 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 240 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 208 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 59 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 255 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 177 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 183 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 133 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 71 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 62 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 44 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 215 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 15 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 19 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 189 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 152 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 46 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 166 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 266 19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_10_piece0 on ambari-0004.test.com:35778 in memory (size: 8.9 KB, free: 366.2 MB) 19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_10_piece0 on ambari-0004.test.com:35356 in memory (size: 8.9 KB, free: 93.2 MB) 19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_6_piece0 on 
ambari-0004.test.com:35778 in memory (size: 8.9 KB, free: 366.2 MB) 19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_6_piece0 on ambari-0004.test.com:35356 in memory (size: 8.9 KB, free: 93.2 MB) 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 140 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 296 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 187 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 156 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 83 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 205 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 198 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 220 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 73 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 89 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 175 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 3 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 60 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 101 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 119 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 256 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 229 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 171 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 273 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 286 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 247 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 40 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 118 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 104 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 121 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 206 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 151 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 217 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 96 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 287 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 218 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 284 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 142 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 160 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 33 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 209 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 246 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 98 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 138 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 239 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 270 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 174 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 61 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 120 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 75 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 100 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 31 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 82 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 115 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 181 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 192 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 50 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 64 19/01/24 12:07:46 INFO 
ContextCleaner: Cleaned accumulator 150 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 248 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 7 19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_8_piece0 on ambari-0004.test.com:35778 in memory (size: 8.9 KB, free: 366.2 MB) 19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_8_piece0 on ambari-0004.test.com:35356 in memory (size: 8.9 KB, free: 93.2 MB) 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 135 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 109 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 292 19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_2_piece0 on ambari-0004.test.com:35778 in memory (size: 3.4 KB, free: 366.2 MB) 19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_2_piece0 on ambari-0004.test.com:35356 in memory (size: 3.4 KB, free: 93.2 MB) 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 55 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 263 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 201 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 52 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 145 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 282 19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_4_piece0 on ambari-0004.test.com:35778 in memory (size: 8.9 KB, free: 366.2 MB) 19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_4_piece0 on ambari-0004.test.com:35356 in memory (size: 8.9 KB, free: 93.2 MB) 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 84 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 153 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 99 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 285 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 131 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 188 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 117 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 69 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 85 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 253 19/01/24 12:07:46 INFO ContextCleaner: Cleaned shuffle 0 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 193 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 236 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 137 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 6 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 169 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 184 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 289 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 65 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 155 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 200 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 72 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 102 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 70 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 271 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 81 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 164 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 67 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 146 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 214 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 223 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 
243 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 139 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 29 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 54 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 237 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 38 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 295 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 112 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 97 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 227 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 180 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 224 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 57 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 103 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 30 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 259 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 293 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 66 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 56 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 105 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 185 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 202 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 222 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 231 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 294 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 288 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 113 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 37 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 63 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 128 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 43 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 235 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 21 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 141 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 9 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 130 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 297 19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_11_piece0 on ambari-0004.test.com:35778 in memory (size: 9.2 KB, free: 366.2 MB) 19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_11_piece0 on ambari-0004.test.com:35356 in memory (size: 9.2 KB, free: 93.2 MB) 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 251 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 114 19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_7_piece0 on ambari-0004.test.com:35778 in memory (size: 9.2 KB, free: 366.2 MB) 19/01/24 12:07:46 INFO BlockManagerInfo: Removed broadcast_7_piece0 on ambari-0004.test.com:35356 in memory (size: 9.2 KB, free: 93.2 MB) 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 254 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 47 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 281 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 86 19/01/24 12:07:46 INFO ContextCleaner: Cleaned accumulator 129 19/01/24 12:07:46 INFO Hive: Renaming src: hdfs://ambari-0002.test.com:8020/model.db/tests/ffftest_copy_copy/profile/.hive-staging_hive_2019-01-24_12-07-45_142_6340309537996504312-1/-ext-10000/part-00000-47f2ce0f-c09c-4976-95cc-74c4b5a0ef9f-c000, dest: 
19/01/24 12:07:46 INFO Hive: Renaming src: hdfs://ambari-0002.test.com:8020/model.db/tests/ffftest_copy_copy/profile/.hive-staging_hive_2019-01-24_12-07-45_142_6340309537996504312-1/-ext-10000/part-00001-47f2ce0f-c09c-4976-95cc-74c4b5a0ef9f-c000, dest: hdfs://ambari-0002.test.com:8020/model.db/tests/ffftest_copy_copy/profile/processing_dttm=1548302384572/part-00001-47f2ce0f-c09c-4976-95cc-74c4b5a0ef9f-c000, Status:true
19/01/24 12:07:46 WARN RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect.
org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! Struct:InsertEventRequestData(filesAdded:null)
	at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
	at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source)
	at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947)
	at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876)
	at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407)
	at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261)
	at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843)
	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
	at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843)
	at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249)
	at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115)
	at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
	at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
	at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694)
	at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63)
	at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146)
	at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114)
	at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58)
	at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96)
	at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721)
19/01/24 12:07:51 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083
19/01/24 12:07:51 INFO metastore: Connected to metastore.
19/01/24 12:07:51 WARN RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect.
org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! Struct:InsertEventRequestData(filesAdded:null)
Struct:InsertEventRequestData(filesAdded:null) at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156) at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source) at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947) at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278) at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216) at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215) at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261) at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97) at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115) at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) 
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258) at org.apache.spark.sql.Dataset.(Dataset.scala:190) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642) at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694) at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721) 19/01/24 12:07:56 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083 19/01/24 12:07:56 INFO metastore: Connected to metastore. 19/01/24 12:07:56 WARN RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect. org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! 
Struct:InsertEventRequestData(filesAdded:null) at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156) at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source) at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947) at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278) at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216) at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215) at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261) at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97) at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115) at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) 
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258) at org.apache.spark.sql.Dataset.(Dataset.scala:190) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642) at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694) at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721) 19/01/24 12:08:01 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083 19/01/24 12:08:01 INFO metastore: Connected to metastore. 19/01/24 12:08:01 WARN RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect. org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! 
Struct:InsertEventRequestData(filesAdded:null) at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156) at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source) at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947) at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278) at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216) at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215) at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261) at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97) at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115) at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) 
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258) at org.apache.spark.sql.Dataset.(Dataset.scala:190) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642) at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694) at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721) 19/01/24 12:08:06 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083 19/01/24 12:08:06 INFO metastore: Connected to metastore. 19/01/24 12:08:06 WARN RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect. org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! 
Struct:InsertEventRequestData(filesAdded:null) at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156) at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source) at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947) at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278) at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216) at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215) at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261) at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97) at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115) at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) 
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258) at org.apache.spark.sql.Dataset.(Dataset.scala:190) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642) at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694) at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721) 19/01/24 12:08:11 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083 19/01/24 12:08:11 INFO metastore: Connected to metastore. 19/01/24 12:08:11 WARN RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect. org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! 
Struct:InsertEventRequestData(filesAdded:null) at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156) at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source) at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947) at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278) at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216) at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215) at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261) at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97) at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115) at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) 
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258) at org.apache.spark.sql.Dataset.(Dataset.scala:190) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642) at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694) at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721) 19/01/24 12:08:16 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083 19/01/24 12:08:16 INFO metastore: Connected to metastore. 19/01/24 12:08:16 WARN RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect. org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! 
Struct:InsertEventRequestData(filesAdded:null) at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156) at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source) at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947) at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278) at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216) at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215) at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261) at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97) at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115) at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) 
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258) at org.apache.spark.sql.Dataset.(Dataset.scala:190) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642) at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694) at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721) 19/01/24 12:08:21 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083 19/01/24 12:08:21 INFO metastore: Connected to metastore. 19/01/24 12:08:21 WARN RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect. org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! 
Struct:InsertEventRequestData(filesAdded:null)
	at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
	at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source)
	at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947)
	at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876)
	at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407)
	at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261)
	at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843)
	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
	at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843)
	at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249)
	at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115)
	at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
	at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
	at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694)
	at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63)
	at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146)
	at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114)
	at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58)
	at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96)
	at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721)
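[Annotation: the exception is raised while Hive.fireInsertEvent (Hive.java:1947) sends the metastore an InsertEventRequestData whose filesAdded field — required by the Thrift contract, as the error message itself states — was never populated. The Shim_v0_14 and HiveClientImpl frames show Spark's embedded Hive client loading a partition for the Profiler's output table, which appears consistent with a client/metastore version mismatch around the insert-event API; the log alone cannot confirm the root cause. A minimal sketch of one commonly used mitigation — assuming it is acceptable for this job to stop firing DML insert events entirely, which is an assumption, not something the log establishes — is to turn off hive.metastore.dml.events when building the session:

    import org.apache.spark.sql.SparkSession

    // Sketch only, not the Profiler's actual startup code. spark.hadoop.* settings
    // are copied into the job's Hadoop/Hive configuration, so this keeps
    // Hive.fireInsertEvent from attempting the fire_listener_event Thrift call
    // that fails in the trace above.
    val spark = SparkSession.builder()
      .appName("Profiler")
      .enableHiveSupport()
      .config("spark.hadoop.hive.metastore.dml.events", "false")
      .getOrCreate()

The same property can instead be set cluster-wide in hive-site.xml; whether that is appropriate depends on whether other consumers rely on metastore insert notifications.]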
Struct:InsertEventRequestData(filesAdded:null) at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156) at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source) at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947) at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278) at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216) at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215) at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261) at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97) at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115) at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) 
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258) at org.apache.spark.sql.Dataset.(Dataset.scala:190) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642) at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694) at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721) 19/01/24 12:08:31 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083 19/01/24 12:08:31 INFO metastore: Connected to metastore. 19/01/24 12:08:31 WARN RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect. org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! 
Struct:InsertEventRequestData(filesAdded:null) at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156) at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source) at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947) at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278) at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216) at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215) at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261) at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97) at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115) at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) 
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258) at org.apache.spark.sql.Dataset.(Dataset.scala:190) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642) at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694) at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721) 19/01/24 12:08:36 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083 19/01/24 12:08:36 INFO metastore: Connected to metastore. 19/01/24 12:08:36 WARN RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect. org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! 
Struct:InsertEventRequestData(filesAdded:null) at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156) at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source) at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947) at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278) at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216) at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215) at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261) at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97) at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115) at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) 
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258) at org.apache.spark.sql.Dataset.(Dataset.scala:190) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642) at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694) at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721) 19/01/24 12:08:41 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083 19/01/24 12:08:41 INFO metastore: Connected to metastore. 19/01/24 12:08:41 WARN RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect. org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! 
Struct:InsertEventRequestData(filesAdded:null) at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156) at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source) at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947) at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278) at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216) at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215) at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261) at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97) at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115) at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) 
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258) at org.apache.spark.sql.Dataset.(Dataset.scala:190) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642) at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694) at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721) 19/01/24 12:08:46 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083 19/01/24 12:08:46 INFO metastore: Connected to metastore. 19/01/24 12:08:46 WARN RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect. org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! 
Struct:InsertEventRequestData(filesAdded:null) at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156) at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source) at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947) at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278) at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216) at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215) at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261) at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97) at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115) at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) 
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258) at org.apache.spark.sql.Dataset.(Dataset.scala:190) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642) at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694) at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721) 19/01/24 12:08:51 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083 19/01/24 12:08:51 INFO metastore: Connected to metastore. 19/01/24 12:08:51 WARN RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect. org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! 
Struct:InsertEventRequestData(filesAdded:null) at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156) at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source) at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947) at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278) at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216) at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215) at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261) at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97) at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115) at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) 
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258) at org.apache.spark.sql.Dataset.(Dataset.scala:190) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642) at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694) at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721) 19/01/24 12:08:56 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083 19/01/24 12:08:56 INFO metastore: Connected to metastore. 19/01/24 12:08:56 WARN RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect. org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! 
Struct:InsertEventRequestData(filesAdded:null) at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156) at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source) at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947) at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278) at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216) at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215) at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261) at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97) at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115) at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) 
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258) at org.apache.spark.sql.Dataset.(Dataset.scala:190) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642) at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694) at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721) 19/01/24 12:09:01 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083 19/01/24 12:09:01 INFO metastore: Connected to metastore. 19/01/24 12:09:01 WARN RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect. org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! 
Struct:InsertEventRequestData(filesAdded:null) at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156) at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source) at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947) at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278) at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216) at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215) at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261) at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97) at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115) at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) 
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258) at org.apache.spark.sql.Dataset.(Dataset.scala:190) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642) at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694) at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721) 19/01/24 12:09:06 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083 19/01/24 12:09:06 INFO metastore: Connected to metastore. 19/01/24 12:09:06 WARN RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect. org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! 
Struct:InsertEventRequestData(filesAdded:null) at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960) at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156) at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source) at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947) at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278) at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216) at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215) at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261) at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97) at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249) at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115) at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) at 
org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258) at org.apache.spark.sql.Dataset.(Dataset.scala:190) at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75) at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642) at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694) at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114) at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96) at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721) 19/01/24 12:09:11 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083 19/01/24 12:09:11 INFO metastore: Connected to metastore. 19/01/24 12:09:11 WARN RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect. org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! Struct:InsertEventRequestData(filesAdded:null) at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960) at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156) at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source) at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947) at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407) at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747) at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745) at 
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745)
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278)
    at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216)
    at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215)
    at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261)
    at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843)
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
    at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694)
    at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63)
    at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146)
    at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114)
    at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58)
    at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96)
    at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721)
19/01/24 12:09:16 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083
19/01/24 12:09:16 INFO metastore: Connected to metastore.
19/01/24 12:09:16 WARN RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect.
org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! Struct:InsertEventRequestData(filesAdded:null)
    at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960)
    at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
    at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876)
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407)
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854)
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747)
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745)
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745)
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278)
    at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216)
    at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215)
    at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261)
    at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843)
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
    at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694)
    at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63)
    at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146)
    at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114)
    at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58)
    at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96)
    at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721)
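[Annotation] Every trace in this log fails at the same point: Spark's INSERT into the profiler output table reaches Hive.loadPartition, which fires a DML insert event (Hive.fireInsertEvent) against the metastore via the thrift fire_listener_event call, and the metastore rejects the InsertEventRequestData struct because its required 'filesAdded' field was never set. RetryingMetaStoreClient then treats the TApplicationException as a lost connection and reconnects every five seconds. A minimal Java sketch of that call path, using the thrift-generated metastore API classes named in the trace; the hive.metastore.dml.events guard is an assumption based on Hive 1.2-era sources, not something visible in this log:

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.FireEventRequest;
import org.apache.hadoop.hive.metastore.api.FireEventRequestData;
import org.apache.hadoop.hive.metastore.api.InsertEventRequestData;

public class FireInsertEventSketch {
    public static void main(String[] args) throws Exception {
        HiveConf conf = new HiveConf();
        // Assumption: Hive.fireInsertEvent only runs when
        // hive.metastore.dml.events (HiveConf.ConfVars.FIRE_EVENTS_FOR_DML) is true.
        if (!conf.getBoolVar(HiveConf.ConfVars.FIRE_EVENTS_FOR_DML)) {
            return; // no event is fired, so the failing thrift call never happens
        }
        // An InsertEventRequestData whose 'filesAdded' list is never populated;
        // a Hive 3 metastore answers exactly as in this log:
        // TApplicationException: Required field 'filesAdded' is unset!
        InsertEventRequestData insertData = new InsertEventRequestData();
        FireEventRequestData eventData = new FireEventRequestData();
        eventData.setInsertData(insertData);
        FireEventRequest request = new FireEventRequest(true, eventData);
        HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
        try {
            client.fireListenerEvent(request); // the thrift fire_listener_event call
        } finally {
            client.close();
        }
    }
}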
[The same "Trying to connect" / "Connected to metastore" messages, WARN RetryingMetaStoreClient line, and identical stack trace repeat for the retries at 12:09:21, 12:09:26, 12:09:31, 12:09:36, and 12:09:41; they are elided here.]
19/01/24 12:09:46 INFO metastore: Trying to connect to metastore with URI thrift://ambari-0003.test.com:9083
19/01/24 12:09:47 INFO metastore: Connected to metastore.
19/01/24 12:09:47 ERROR Hive: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! Struct:InsertEventRequestData(filesAdded:null)
    at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1949)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876)
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407)
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854)
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747)
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745)
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745)
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278)
    at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216)
    at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215)
    at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261)
    at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843)
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
    at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694)
    at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63)
    at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146)
    at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114)
    at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58)
    at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96)
    at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721)
Caused by: org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! Struct:InsertEventRequestData(filesAdded:null)
    at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960)
    at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
    at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947)
    ... 46 more
19/01/24 12:09:47 WARN HiveClientImpl: HiveClient got thrift exception, destroying client and retrying (23 tries remaining)
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854)
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747)
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745)
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745)
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278)
    at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216)
    at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215)
    at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261)
    at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843)
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
    at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694)
    at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63)
    at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146)
    at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114)
    at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58)
    at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96)
    at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! Struct:InsertEventRequestData(filesAdded:null)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1884)
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407)
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324)
    ... 43 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! Struct:InsertEventRequestData(filesAdded:null)
    at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1949)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876)
    ... 45 more
Caused by: org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! Struct:InsertEventRequestData(filesAdded:null)
    at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960)
    at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
    at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947)
    ... 46 more
19/01/24 12:09:52 WARN HiveClientImpl: Deadline exceeded
19/01/24 12:09:52 INFO AnnotationConfigApplicationContext: Closing org.springframework.context.annotation.AnnotationConfigApplicationContext@43fcf4ba: startup date [Thu Jan 24 12:07:24 CST 2019]; root of context hierarchy
19/01/24 12:09:52 ERROR ApplicationMaster: User class threw exception: org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! Struct:InsertEventRequestData(filesAdded:null);
org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! Struct:InsertEventRequestData(filesAdded:null);
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
    at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694)
    at com.thinkbiganalytics.spark.SparkContextService20.sql(SparkContextService20.java:63)
    at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToOutputTable(OutputWriter.java:146)
    at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeResultToTable(OutputWriter.java:114)
    at com.thinkbiganalytics.spark.dataprofiler.output.OutputWriter.writeModel(OutputWriter.java:58)
    at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.run(Profiler.java:96)
    at com.thinkbiganalytics.spark.dataprofiler.core.Profiler.main(Profiler.java:70)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! Struct:InsertEventRequestData(filesAdded:null)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1884)
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1407)
    at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1324)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:854)
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747)
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745)
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745)
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278)
    at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216)
    at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215)
    at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261)
    at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843)
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
    ... 26 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! Struct:InsertEventRequestData(filesAdded:null)
    at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1949)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1876)
    ... 45 more
Caused by: org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! Struct:InsertEventRequestData(filesAdded:null)
    at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_fire_listener_event(ThriftHiveMetastore.java:4182)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.fire_listener_event(ThriftHiveMetastore.java:4169)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.fireListenerEvent(HiveMetaStoreClient.java:1960)
    at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
    at com.sun.proxy.$Proxy45.fireListenerEvent(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.fireInsertEvent(Hive.java:1947)
    ... 46 more
19/01/24 12:09:52 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.TApplicationException: Required field 'filesAdded' is unset! Struct:InsertEventRequestData(filesAdded:null); [stack trace and Caused by chain identical to the ERROR ApplicationMaster entry above])
19/01/24 12:09:52 INFO SparkContext: Invoking stop() from shutdown hook
19/01/24 12:09:52 INFO AbstractConnector: Stopped Spark@17bc5a91{HTTP/1.1,[http/1.1]}{0.0.0.0:0}
19/01/24 12:09:52 INFO SparkUI: Stopped Spark web UI at http://ambari-0004.test.com:44759
19/01/24 12:09:52 INFO YarnAllocator: Driver requested a total number of 0 executor(s).
19/01/24 12:09:52 INFO YarnClusterSchedulerBackend: Shutting down all executors
19/01/24 12:09:52 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
19/01/24 12:09:52 INFO SchedulerExtensionServices: Stopping SchedulerExtensionServices (serviceOption=None, services=List(), started=false)
19/01/24 12:09:52 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
19/01/24 12:09:52 INFO MemoryStore: MemoryStore cleared
19/01/24 12:09:52 INFO BlockManager: BlockManager stopped
19/01/24 12:09:52 INFO BlockManagerMaster: BlockManagerMaster stopped
19/01/24 12:09:52 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
19/01/24 12:09:52 INFO SparkContext: Successfully stopped SparkContext
19/01/24 12:09:52 INFO ShutdownHookManager: Shutdown hook called
19/01/24 12:09:52 INFO ShutdownHookManager: Deleting directory /hadoop/yarn/local/usercache/nifi/appcache/application_1547783609774_0221/spark-05e64e7e-1feb-40be-8054-bc2ba6366a43
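[Annotation] The job exits with code 15 after HiveClientImpl exhausts its retry budget ("Deadline exceeded"). On HDP 3.0.x this "Required field 'filesAdded' is unset!" failure is a known client/server mismatch: Spark 2.3's bundled Hive 1.2 client is talking to the Hive 3.x metastore, and the two sides disagree on the InsertEventRequestData thrift struct. A commonly reported workaround, offered here as an assumption to validate on your own cluster rather than something this log confirms, is to stop the old client from firing DML events at all, e.g. by submitting with --conf spark.hadoop.hive.metastore.dml.events=false, or equivalently in the driver:

import org.apache.spark.sql.SparkSession;

public class ProfilerSessionSketch {
    public static void main(String[] args) {
        // Hypothetical session setup for a job like the Profiler above.
        // spark.hadoop.* entries are copied into the job's Hadoop/Hive
        // configuration, so (assuming the Hive 1.2 guard described earlier)
        // Hive.fireInsertEvent is skipped after loadPartition.
        SparkSession spark = SparkSession.builder()
                .appName("Profiler")
                .enableHiveSupport()
                .config("spark.hadoop.hive.metastore.dml.events", "false")
                .getOrCreate();
        // The INSERT that failed in this log would then run without firing
        // the metastore insert event, e.g.:
        // spark.sql("INSERT INTO TABLE ... PARTITION (...) SELECT ...");
        spark.stop();
    }
}

The trade-off: metastore listeners (for example, replication or notification consumers) no longer see insert events from this job; aligning the Spark build's Hive client with the HDP 3 metastore is the cleaner long-term fix.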